What is the difference between Θ(n) and O(n)?

Time: 2022-04-17 13:22:57

Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?

9 Answers

#1


529  

Short explanation:

If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (input size) gets larger is proportional to g(n).

If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).

Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.


More technically:

O(n) represents an upper bound. Θ(n) means a tight bound. Ω(n) represents a lower bound.

f(x) = Θ(g(x)) iff f(x) = O(g(x)) and f(x) = Ω(g(x))

Basically when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2).

In fact, since f(n) = Θ(g(n)) means that for sufficiently large values of n, f(n) can be bounded between c1*g(n) and c2*g(n) for some constants c1 and c2, i.e. the growth rate of f is asymptotically equal to that of g: g can be a lower bound and an upper bound of f. This directly implies that f can be a lower bound and an upper bound of g as well. Consequently,

f(x) = Θ(g(x)) iff g(x) = Θ(f(x))

Similarly, to show f(n) = Θ(g(n)), it's enough to show g is an upper bound of f (i.e. f(n) = O(g(n))) and f is a lower bound of g (i.e. f(n) = Ω(g(n)) which is the exact same thing as g(n) = O(f(n))). Concisely,

f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
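
For a quick worked example of this: take f(n) = 3n^2 + 2n and g(n) = n^2. For every n ≥ 1 we have 3n^2 ≤ 3n^2 + 2n ≤ 5n^2, so the definition is satisfied with c1 = 3 and c2 = 5, and therefore 3n^2 + 2n = Θ(n^2). Equivalently, 3n^2 + 2n = O(n^2) and n^2 = O(3n^2 + 2n).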


There are also little-oh (o) and little-omega (ω) notations representing loose upper and loose lower bounds of a function.

To summarize:

f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).

f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).

f(x) = o(g(x)) (little-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).

f(x) = ω(g(x)) (little-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).

f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).

For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.

#2


284  

There's a simple way (a trick, I guess) to remember which notation means what.

All of the Big-O notations can be considered to have a bar.

When looking at an Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.

When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.

When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.

#3


51  

one is Big "O"

one is Big Theta

http://en.wikipedia.org/wiki/Big_O_notation

Big O means your algorithm will execute in no more steps than the given expression (n^2).

Big Omega means your algorithm will execute in no fewer steps than the given expression (n^2).

When both conditions are true for the same expression, you can use the big theta notation.
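
As a small illustration (the function name here is made up, not part of the answer), the nested loop below performs exactly n*n constant-time steps on every input of size n, so its step count is both O(n^2) and Ω(n^2), which is exactly the situation where Θ(n^2) applies:

// Always does exactly n*n units of work: O(n^2) and Ω(n^2), hence Θ(n^2).
long long count_all_pairs(int n){
    long long steps = 0;
    for(int i=0; i<n; i++){
        for(int j=0; j<n; j++){
            steps += 1;   // executed exactly n*n times; there is no early exit
        }
    }
    return steps;
}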

#4


31  

Rather than provide theoretical definitions, which are beautifully summarized here already, I'll give a simple example:

Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound are n.

for(int i=0; i<n; i++){
    f(i);   // f(...) is always called exactly n times
}

The second code fragment below has the asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i).

for(int i=0; i<n; i++){
    if( f2(i) ) break;   // may stop the loop early, depending on f2
    f(i);                // therefore f(...) is called at most n times
}
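
To make the dependence on f2 concrete, here are two hypothetical implementations (purely illustrative; the fragment above does not define f2):

bool f2_never_true(int i){ return false; }   // the loop never breaks: f is called n times, so the O(n) bound is tight
bool f2_true_at_once(int i){ return true; }  // the loop breaks in the first iteration: constant work, the Ω(1) case

With the first version the fragment does Θ(n) work; with the second it does Θ(1) work; an f2 that first returns true around i = log(n) would give the Ω(log(n)) case.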

#5


10  

A chart could make the previous answers easier to understand:

[Chart: Θ-Notation - Same order | O-Notation - Upper bound]

In English,

On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n) ). Ignore the constants, and if the upper bound and lower bound have the same order of magnitude, one can validly say g(n) is the Theta of f(n).

Starting with the right-hand, simpler example: it is saying that the upper bound (i.e. big O) of f(n) is g(n). Note that g(n) is simply the order of magnitude and ignores the constant c (just as all big O notation does).

#6


7  

Think of saying Theta = a function as a shortcut way of saying Big-O = a function AND Omega = a function.

When the big O of a function and Omega of the function are the same, Theta is a shorthand way to refer to that special situation.

Thus, if you say Theta = some expression, then it is still correct to say O = some expression, and Omega = some expression. The only difference is that saying Theta = some expression contains more information.


Rough analogy:

O says "that animal has less than or equal to 5 legs." Omega says "that animal has more than or equal to 5 legs."

Theta is like saying "that animal has 5 legs".

In other words, if an animal has 5 legs (Theta), then both the following statements are true:

  1. the animal has 5 or less legs. (O)
  2. the animal has 5 or more legs. (Omega)

Just keep in mind, the expressions aren't necessarily specific numbers, but functions of varying orders of magnitude ( log(n), n, n^2, etc ).

#7


5  

f(n) belongs to O(n) if there exists a positive constant k such that f(n) <= k*n (for all sufficiently large n).

f(n) belongs to Θ(n) if there exist positive constants k1 and k2 such that k1*n <= f(n) <= k2*n (for all sufficiently large n).
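
For example, f(n) = 2n + 3 belongs to O(n), since 2n + 3 <= 5*n for every n >= 1 (take k = 5), and it belongs to Θ(n), since 2*n <= 2n + 3 <= 5*n for every n >= 1 (take k1 = 2 and k2 = 5).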

Wikipedia article on Big O Notation

#8


3  

Conclusion: we regard big O, big θ and big Ω as the same thing.

Why? I will tell the reason below:

Firstly, I will clarify one wrong statement: some people think that we only care about the worst-case time complexity, so we always use big O instead of big θ. I would say this person is bullshitting. Upper and lower bounds are used to describe one function, not to describe the time complexity in general. The worst-case time function has its upper and lower bounds; the best-case time function has its upper and lower bounds too.

In order to explain the relation between big O and big θ clearly, I will first explain the relation between big O and small o. From the definitions, we can easily see that small o is a subset of big O. For example:

For T(n) = n^2 + n, we can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But for small o, T(n) = o(n^2) does not meet the definition of small o, so only T(n) = o(n^3) and T(n) = o(n^4) are correct for small o. And what is the leftover T(n) = O(n^2)? It's big θ!
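
Using limits, this is easy to check: (n^2 + n)/n^2 tends to 1 as n grows, a finite non-zero value, so T(n) = O(n^2) and in fact T(n) = Θ(n^2); but T(n) = o(n^2) fails, because small o would require that ratio to tend to 0. By contrast, (n^2 + n)/n^3 tends to 0, which is why T(n) = o(n^3) holds.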

Generally, we say the big O is O(n^2), and hardly ever say T(n) = O(n^3) or T(n) = O(n^4). Why? Because we subconsciously regard big O as big θ.

Similarly, we also regard big Ω as big θ subconsciously.

In a word, big O, big θ and big Ω are not the same thing by definition, but they are the same thing in our mouths and brains.

#9


2  

Using limits

Let's consider f(n) > 0 and g(n) > 0 for all n. It's ok to assume this, because even the fastest real algorithm has at least one operation and completes its execution after it starts. This will simplify the calculations, because we can use the value f(n) instead of the absolute value |f(n)|.

  1. f(n) = O(g(n))

    General:

              f(n)     
    0 ≤ lim ──────── < ∞
        n➜∞   g(n)
    

    For g(n) = n:

              f(n)     
    0 ≤ lim ──────── < ∞
        n➜∞    n
    

    Examples:

        Expression               Value of the limit
    ------------------------------------------------
    n        = O(n)                      1
    1/2*n    = O(n)                     1/2
    2*n      = O(n)                      2
    n+log(n) = O(n)                      1
    n        = O(n*log(n))               0
    n        = O(n²)                     0
    n        = O(nⁿ)                     0
    

    Counterexamples:

        Expression                Value of the limit
    -------------------------------------------------
    n        ≠ O(log(n))                 ∞
    1/2*n    ≠ O(sqrt(n))                ∞
    2*n      ≠ O(1)                      ∞
    n+log(n) ≠ O(log(n))                 ∞
    
  2. f(n) = Θ(g(n))

    General:

              f(n)     
    0 < lim ──────── < ∞
        n➜∞   g(n)
    

    For g(n) = n:

              f(n)     
    0 < lim ──────── < ∞
        n➜∞    n
    

    Examples:

        Expression               Value of the limit
    ------------------------------------------------
    n        = Θ(n)                      1
    1/2*n    = Θ(n)                     1/2
    2*n      = Θ(n)                      2
    n+log(n) = Θ(n)                      1
    

    Counterexamples:

        Expression                Value of the limit
    -------------------------------------------------
    n        ≠ Θ(log(n))                 ∞
    1/2*n    ≠ Θ(sqrt(n))                ∞
    2*n      ≠ Θ(1)                      ∞
    n+log(n) ≠ Θ(log(n))                 ∞
    n        ≠ Θ(n*log(n))               0
    n        ≠ Θ(n²)                     0
    n        ≠ Θ(nⁿ)                     0
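
For a rough numerical sanity check of a few of these limits, you can simply evaluate f(n)/g(n) at a large n. A minimal sketch (the program below is only an illustration of the table values, assuming n = 10^6 is large enough to see the trend):

#include <cmath>
#include <cstdio>

int main() {
    double n = 1e6;   // a large n, for illustration only
    std::printf("(n + log n) / n = %f\n", (n + std::log(n)) / n);   // close to 1: n + log(n) = Θ(n)
    std::printf("(1/2 * n) / n   = %f\n", (0.5 * n) / n);           // 0.5: 1/2*n = Θ(n)
    std::printf("n / (n * log n) = %f\n", n / (n * std::log(n)));   // small, tends to 0: n = O(n*log(n)) but not Θ(n*log(n))
    std::printf("n / n^2         = %f\n", n / (n * n));             // tends to 0: n = O(n²) but not Θ(n²)
    std::printf("(2 * n) / 1     = %f\n", (2.0 * n) / 1.0);         // grows without bound: 2*n ≠ O(1)
    return 0;
}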
    
