So let's try this high precision double. For that, I'll first delete the entry I have in my science collection here, and then I'll again insert one document into my science collection, and that one document will use high precision doubles. So I still have a, but now I will use NumberDecimal, which is the MongoDB shell constructor for this value. Again, if you're using a driver for Python, Node, C++, or Java, you will find help in the driver docs; there you will find the constructor you use to create this 128 bit decimal. In the shell, the constructor is named NumberDecimal, and again we use quotation marks to pass the value. You don't have to use them, you could pass a plain number instead, but then you face that issue of imprecision, because that value is not precise; so at the point in time when MongoDB tries to store it precisely, it still has the imprecision baked in. So we should pass it as a string, and the same of course for b. So there, I also have my NumberDecimal 0.1 stored like this.

Now if we query our database here, we see it's stored as such a 128 bit decimal in the database. Now let's repeat that aggregation command from before, where we project into that result field by subtracting b from a. If I hit enter here, we see the result is now actually an exact NumberDecimal, so now we indeed have the exact value we need. And just as with long and integer, we can use NumberDecimal for all kinds of calculations, like the subtraction here; we could also sort by it, and we can increment it in an update operation. You have to be aware though, just as with the long value, that if you do an update, for example, so we update our one value here, and you do that with increment, and you increment a by 0.1, let's say, that will work. But if you have a look into your database, you of course now face the issue of having an imprecision. It actually did not convert the field to a 64 bit float, it kept the 128 bit decimal, but you can introduce imprecision here if you add a plain float like this, because that float can be imprecise.
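The exact commands aren't shown in the transcript; a minimal mongosh sketch of this whole sequence, assuming the values a: 0.3 and b: 0.1 from the earlier float example, might look like this:

    // remove the previous float-based document
    db.science.deleteMany({})

    // insert high precision 128 bit decimals; the values are passed
    // as strings so the shell does not first parse them as imprecise
    // 64 bit floats
    db.science.insertOne({ a: NumberDecimal("0.3"), b: NumberDecimal("0.1") })

    // the stored values show up as Decimal128 in the database
    db.science.find()

    // subtracting b from a now yields an exact NumberDecimal("0.2")
    db.science.aggregate([
      { $project: { result: { $subtract: ["$a", "$b"] } } }
    ])

    // this works and keeps the field a Decimal128, but the plain 0.1
    // is a 64 bit float and can re-introduce imprecision
    db.science.updateOne({}, { $inc: { a: 0.1 } })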
So a better way of updating is of course, just as before, to use NumberDecimal in this case and pass the value as a string. If you do this, you're guaranteed that you don't suddenly introduce some imprecision into your otherwise precise data on the backend. But besides that, you can use it in your normal sorting and querying, just like a normal number, but with more precision.

Of course that precision comes at a price, and that's also worth noting. If I quickly add a new collection, nums, and I insert one document here, let's say a, which is 0.1, and if I now have a look at the stats of my nums collection, with only that one document with that one field in there, we see the size at which it happens to be stored. And if I now remove that, so I run deleteMany on nums to delete all documents that are in there, and I now insert a again, but now I use NumberDecimal for this value, then you will see, if I have a look at the stats again, that it's actually bigger. So here you see it has a bigger size now, because more space is reserved for this high precision decimal. So that is a downside, and using it for all decimal values might not be optimal. But for cases where you need high precision, this is your solution, which allows you to do high precision math without losing any precision.
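Again, the on-screen commands aren't in the transcript; a sketch of the safer update and of the size comparison, reusing the science and nums collections and field names from above, might look like this:

    // increment with a NumberDecimal instead of a plain float, so the
    // 0.1 is exact and no imprecision sneaks into the stored decimal
    db.science.updateOne({}, { $inc: { a: NumberDecimal("0.1") } })

    // compare the storage size of a plain double vs a 128 bit decimal
    db.nums.insertOne({ a: 0.1 })
    db.nums.stats().size        // size with a 64 bit float

    db.nums.deleteMany({})
    db.nums.insertOne({ a: NumberDecimal("0.1") })
    db.nums.stats().size        // bigger: more space is reserved for Decimal128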