Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?
9 Answers
#1
533
Short explanation:
If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (input size) gets larger is proportional to g(n).
If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).
Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.
More technically:
O(n) represents upper bound. Θ(n) means tight bound. Ω(n) represents lower bound.
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and f(x) = Ω(g(x))
Basically when we say an algorithm is of O(n), it's also O(n²), O(n¹⁰⁰⁰⁰⁰⁰), O(2ⁿ), ... but a Θ(n) algorithm is not Θ(n²).
In fact, since f(n) = Θ(g(n)) means that for sufficiently large values of n, f(n) can be bounded between c1·g(n) and c2·g(n) for some constants c1 and c2, i.e. the growth rate of f is asymptotically equal to that of g: g is both a lower bound and an upper bound of f. This directly implies f is both a lower bound and an upper bound of g as well. Consequently,
f(x) = Θ(g(x)) iff g(x) = Θ(f(x))
Similarly, to show f(n) = Θ(g(n)), it's enough to show g is an upper bound of f (i.e. f(n) = O(g(n))) and f is a lower bound of g (i.e. f(n) = Ω(g(n)) which is the exact same thing as g(n) = O(f(n))). Concisely,
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
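This equivalence can be spot-checked for concrete functions. Below is a minimal sketch (the helper names and the sample functions are my own, and checking a few large sample points is a heuristic, not a proof):

```python
import math

def is_big_o(f, g, c=1000, n0=10**6):
    # Heuristic: f(n) <= c*g(n) at a few large sample points (not a proof)
    return all(f(n) <= c * g(n) for n in (n0, 10 * n0, 100 * n0))

def is_big_theta(f, g):
    # f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
    return is_big_o(f, g) and is_big_o(g, f)

f = lambda n: 3 * n + 2 * math.log(n)
print(is_big_theta(f, lambda n: n))      # True: 3n + 2·log n = Θ(n)
print(is_big_theta(f, lambda n: n * n))  # False: n² is an upper bound of f, but f is not a lower bound of n²
```

The second call fails precisely because g(x) = O(f(x)) does not hold: no constant multiple of 3n + 2·log n ever dominates n² for all large n.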
There are also little-oh (o) and little-omega (ω) notations representing loose upper and loose lower bounds of a function.
To summarize:
f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
f(x) = o(g(x)) (little-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).
f(x) = ω(g(x)) (little-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).
f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).
For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.
#2
285
There's a simple way (a trick, I guess) to remember which notation means what.
All of the Big-O notations can be considered to have a bar.
When looking at a Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.
When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.
When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.
#3
52
one is Big "O"
one is Big Theta
http://en.wikipedia.org/wiki/Big_O_notation
Big O means your algorithm will execute in no more steps than in the given expression (n^2).
Big Omega means your algorithm will execute in no fewer steps than in the given expression (n^2).
When both conditions are true for the same expression, you can use the big-theta notation.
#4
33
Rather than provide theoretical definitions, which are beautifully summarized here already, I'll give a simple example:
Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound are n.
for(int i=0; i<n; i++){
    f(i);
}
The second code fragment below has an asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i).
for(int i=0; i<n; i++){
    if( f2(i) ) break;
    f(i);
}
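To make the call-count argument concrete, here is a sketch of both fragments translated to Python, with a trivial counting stand-in for f (the names and the sample predicates are my own, chosen only for illustration):

```python
call_count = 0

def f(i):
    # stand-in for the O(1) function f(...) from the answer; just counts calls
    global call_count
    call_count += 1

def fragment1(n):
    # always calls f exactly n times -> Θ(n)
    global call_count
    call_count = 0
    for i in range(n):
        f(i)
    return call_count

def fragment2(n, f2):
    # calls f at most n times -> O(n); the lower bound depends on f2
    global call_count
    call_count = 0
    for i in range(n):
        if f2(i):
            break
        f(i)
    return call_count

print(fragment1(100))                    # 100 calls, regardless of the input
print(fragment2(100, lambda i: i == 5))  # 5 calls: the early break cuts the loop short
print(fragment2(100, lambda i: False))   # 100 calls: the worst case still meets the upper bound
```

The first fragment's call count never varies, which is why its bound is tight; the second's depends entirely on the predicate, so only the upper bound is guaranteed.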
#5
10
A chart could make the previous answers easier to understand:
Θ-Notation - Same order | O-Notation - Upper bound
In English,
On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n)). Ignore the constants, and if the upper bound and lower bound have the same order of magnitude, one can validly say g(n) is the Theta of f(n).
Starting with the right, the simpler example: it is saying the upper bound (i.e. big O) of f(n) is g(n). Note that g(n) is simply the order of magnitude and ignores the constant c (just as all big-O notation does).
#6
9
Theta is a shorthand way of referring to a special situation where the big O and Omega are the same.
Thus, if one claims the Theta is expression q, then they are also necessarily claiming that big O is expression q and Omega is expression q.
Rough analogy:
If Theta claims, "That animal has 5 legs," then it follows that big O is true ("That animal has less than or equal to 5 legs") and Omega is true ("That animal has more than or equal to 5 legs").
It's only a rough analogy because the expressions aren't necessarily specific numbers, but instead functions of varying orders of magnitude such as log(n), n, n^2, (etc.).
#7
5
f(n) belongs to O(n) if there exists a positive k such that f(n) <= k*n.
f(n) belongs to Θ(n) if there exist positive k1, k2 such that k1*n <= f(n) <= k2*n.
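These constant-based definitions can be spot-checked for a concrete function. A small sketch (the helper names and the example f are my own; sampling a few points is a heuristic check, not a proof):

```python
def in_O_of_n(f, k, samples=(10, 1000, 10**6)):
    # f belongs to O(n) if f(n) <= k*n; checked at a few sample points
    return all(f(n) <= k * n for n in samples)

def in_Theta_of_n(f, k1, k2, samples=(10, 1000, 10**6)):
    # f belongs to Θ(n) if k1*n <= f(n) <= k2*n; checked at a few sample points
    return all(k1 * n <= f(n) <= k2 * n for n in samples)

f = lambda n: 5 * n + 3
print(in_O_of_n(f, k=6))              # True: 5n + 3 <= 6n once n >= 3
print(in_Theta_of_n(f, k1=5, k2=6))   # True: f is squeezed between 5n and 6n
```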
Wikipedia article on Big O Notation
#8
3
Conclusion: we regard big O, big θ and big Ω as the same thing.
Why? I will tell the reason below:
Firstly, I will clarify one wrong statement: some people think that we only care about the worst-case time complexity, so we always use big O instead of big θ. I would say that claim is nonsense. Upper and lower bounds are used to describe one function, not to describe the time complexity. The worst-case time function has its upper and lower bound; the best-case time function has its upper and lower bound too.
In order to clearly explain the relation between big O and big θ, I will explain the relation between big O and small o first. From the definitions, we can easily see that small o is a subset of big O. For example:
Given T(n) = n^2 + n, we can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But for small o, T(n) = o(n^2) does not meet the definition of small o, so only T(n) = o(n^3) and T(n) = o(n^4) are correct. And what is the leftover T(n) = O(n^2)? It's big θ!
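The distinction above can be illustrated with the limit view of small o: f = o(g) requires f(n)/g(n) → 0, while f = Θ(g) keeps the ratio bounded away from 0. A rough sketch (the helper name is my own; evaluating at one large n approximates the limit, it does not prove it):

```python
def ratio_at_large_n(f, g, n=10**8):
    # f = o(g) requires f(n)/g(n) -> 0; here we just sample one large n
    return f(n) / g(n)

T = lambda n: n**2 + n
print(ratio_at_large_n(T, lambda n: n**2))  # ≈ 1: bounded but nonzero, so O(n^2) yet not o(n^2)
print(ratio_at_large_n(T, lambda n: n**3))  # ≈ 1e-08: tends to 0, so T(n) = o(n^3)
```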
Generally, we say big O is O(n^2), and hardly ever say T(n) = O(n^3) or T(n) = O(n^4). Why? Because we regard big O as big θ subconsciously.
Similarly, we also regard big Ω as big θ subconsciously.
In a word, big O, big θ and big Ω are not the same thing by definition, but they are the same thing in our mouths and brains.
#9
2
Using limits
Let's consider f(n) > 0 and g(n) > 0 for all n. This is a reasonable assumption, because the fastest real algorithm has at least one operation and completes its execution after it starts. It also simplifies the calculus, because we can use the value f(n) instead of the absolute value |f(n)|.
-
f(n) = O(g(n))
General:
         f(n)
0 ≤ lim ────── < ∞
    n➜∞  g(n)
For g(n) = n:

         f(n)
0 ≤ lim ────── < ∞
    n➜∞    n
Examples:
Expression            Value of the limit
------------------------------------------------
n = O(n)              1
1/2*n = O(n)          1/2
2*n = O(n)            2
n+log(n) = O(n)       1
n = O(n*log(n))       0
n = O(n²)             0
n = O(nⁿ)             0
Counterexamples:
Expression              Value of the limit
-------------------------------------------------
n ≠ O(log(n))           ∞
1/2*n ≠ O(sqrt(n))      ∞
2*n ≠ O(1)              ∞
n+log(n) ≠ O(log(n))    ∞
-
f(n) = Θ(g(n))
General:
         f(n)
0 < lim ────── < ∞
    n➜∞  g(n)
For g(n) = n:

         f(n)
0 < lim ────── < ∞
    n➜∞    n
Examples:
Expression            Value of the limit
------------------------------------------------
n = Θ(n)              1
1/2*n = Θ(n)          1/2
2*n = Θ(n)            2
n+log(n) = Θ(n)       1
Counterexamples:
Expression              Value of the limit
-------------------------------------------------
n ≠ Θ(log(n))           ∞
1/2*n ≠ Θ(sqrt(n))      ∞
2*n ≠ Θ(1)              ∞
n+log(n) ≠ Θ(log(n))    ∞
n ≠ Θ(n*log(n))         0
n ≠ Θ(n²)               0
n ≠ Θ(nⁿ)               0
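The limit test can be approximated numerically. A crude sketch (the helper name is my own; evaluating the ratio at one large n is only a stand-in for the actual limit):

```python
import math

def limit_ratio(f, g, n=10**9):
    # crude stand-in for lim f(n)/g(n) as n -> ∞: evaluate the ratio at one large n
    return f(n) / g(n)

# n + log(n) = Θ(n): the ratio stays strictly between 0 and ∞
print(limit_ratio(lambda n: n + math.log(n), lambda n: n))  # ≈ 1
# n = O(n²) but n ≠ Θ(n²): the ratio tends to 0
print(limit_ratio(lambda n: n, lambda n: n * n))            # ≈ 1e-09
```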