When we look at an algorithm, we want to try to optimize our time and space complexity. Time is how long the algorithm takes to run, and space refers to how much memory it uses while running.

Big O notation is a standard mathematical notation that shows how efficient an algorithm is in the worst-case scenario, relative to its input size. There are two ways we can evaluate the Big O of an algorithm. The first one, which is what we're going to talk about in this lecture, is experimental. The second one is theoretical.

Experimental evaluation is honestly kind of what it sounds like. As we add more elements, you can envision a graph: on one axis we have the input size, with values like 0, 1, 10, 100, 1000, and so on, and on the other axis we have time. You can imagine that as the input size goes up, the time forms a straight line. With it being linear, we can represent this as t(n) = an + b, so as our input size n goes up, the time grows linearly with the input size.

Now, when we evaluate an algorithm, there are two things that we want: less time and less memory. These are ideal for an algorithm.
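The experimental approach described above can be sketched as a small benchmark, assuming a hypothetical linear_work function (not from the lecture) that does one unit of work per element:

```python
import time

def linear_work(n):
    # Touches each of the n elements exactly once -> time grows as t(n) = an + b.
    total = 0
    for i in range(n):
        total += i
    return total

# Benchmark at growing input sizes, as on the graph's input-size axis.
for n in [1, 10, 100, 1000, 10000]:
    start = time.perf_counter()
    linear_work(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>6}  time={elapsed:.6f}s")
```

Plotting the printed timings against n should give roughly a straight line, which is the linear shape the lecture describes.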
More time and more memory are not ideal.

So again, we can look at another graph with time and input size on the axes, but this time the curve bends upward. This would be quadratic, and to represent it we can say t(n) = an² + bn + c. Notice that the highest power here is two, while in the linear case it was one. So our Big O notation for the linear case would be O(n), and our Big O notation for the quadratic case would be O(n²). We don't care about coefficients, so we can get rid of a, b, and c; we care only about the highest power, which in this case is two, and in the linear case is one. That is how we're able to represent our Big O notation.

And so if you wanted to do an experimental evaluation, you would create a benchmark for your program and run it with input sizes of 1, 10, 100, 1000, 10,000, and maybe even 100,000. But in reality, if you ran 100,000 as your input size, then depending on what your algorithm is doing, that could take a substantial amount of time, and you might not have that time to waste. That is where theoretical evaluation comes in.
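The quadratic case can be sketched the same way, assuming a hypothetical quadratic_work function that compares every element with every other. This also shows why coefficients drop out: whatever a, b, and c are, doubling n roughly quadruples the time, so only the highest power matters for the O(n²) label.

```python
import time

def quadratic_work(n):
    # Compares every element with every other -> time grows as t(n) = an^2 + bn + c.
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

# For a quadratic algorithm, doubling n should roughly quadruple the time.
for n in [250, 500, 1000]:
    start = time.perf_counter()
    quadratic_work(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>5}  time={elapsed:.4f}s")
```

This also illustrates the practical warning above: pushing n toward 100,000 with a quadratic algorithm means roughly 10 billion inner steps, which is why large experimental runs can waste a lot of time.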
And we will talk about theoretical Big O notation in the next lecture.