1 00:00:01,010 --> 00:00:06,190 Real quick note: if you skipped over the entire React application, chances are you downloaded the completed 2 00:00:06,200 --> 00:00:07,670 code from the last lecture. 3 00:00:07,730 --> 00:00:12,320 If you did so, make sure you read the text note inside that lecture, but you need to make sure that you 4 00:00:12,320 --> 00:00:16,220 overwrite your entire current project with all the code from that zip file. 5 00:00:16,220 --> 00:00:21,390 We made two small changes, or really one small change in two locations, to the posts service 6 00:00:21,410 --> 00:00:27,440 and the comments service. You need to make sure that you get those updates. After you update those files, 7 00:00:27,500 --> 00:00:31,130 make sure you rerun npm install in the posts and comments directories. 8 00:00:31,190 --> 00:00:33,230 OK, so let's get back to it. 9 00:00:33,280 --> 00:00:38,150 In the last video we finished up the React application, and we saw that while we were able to see all 10 00:00:38,150 --> 00:00:43,100 of our different posts and the comments associated with them, there was a downside here, and the downside is 11 00:00:43,100 --> 00:00:48,980 that for every single post we load up, we are making one request to our comments service to get all the 12 00:00:48,980 --> 00:00:51,430 comments associated with that post. 13 00:00:51,500 --> 00:00:56,030 In my case I've got three posts right here, and I ended up having to make three separate requests to 14 00:00:56,030 --> 00:01:01,430 our comments service to get the comments for each one. So in other words, we're in this kind of scenario 15 00:01:01,430 --> 00:01:02,650 right now. 16 00:01:02,770 --> 00:01:09,360 We are making a GET request to some posts endpoint that's giving us back an array, or list, of posts, and 17 00:01:09,360 --> 00:01:13,290 then for every one of those we have to make a follow-up request. 18 00:01:13,290 --> 00:01:16,380 This is incredibly, incredibly inefficient. 19 00:01:16,590 --> 00:01:20,520 So I think it would really be worth our time to figure out how to maybe condense all this stuff down 20 00:01:20,520 --> 00:01:22,500 to just one request. 21 00:01:22,500 --> 00:01:28,440 I want to be able to make one request, one single request, and get all of our posts and all the associated 22 00:01:28,440 --> 00:01:30,750 comments for those posts as well. 23 00:01:30,780 --> 00:01:33,280 That's what I want to try to do now. 24 00:01:33,300 --> 00:01:38,820 If you were trying to do this in a sort of monolith architecture, this would be really easy and straightforward 25 00:01:38,820 --> 00:01:44,400 to do. So if we were building a monolith right now, we could maybe say that if we ever make a GET request 26 00:01:44,460 --> 00:01:49,890 to /posts and add on a query string that says something like comments=true, maybe that would 27 00:01:49,890 --> 00:01:54,840 be some sign to our monolith server that we want to get a list of our posts with all of the relevant 28 00:01:54,840 --> 00:02:01,800 comments embedded in those posts as well. That would be super straightforward, super easy with a monolith approach. 29 00:02:01,850 --> 00:02:04,170 But of course we're not using a monolith. 30 00:02:04,220 --> 00:02:06,970 How are we going to solve this with what we have available? 31 00:02:07,070 --> 00:02:10,070 How can we solve this with microservices? 32 00:02:10,200 --> 00:02:14,970 Right now we only have the ability to make a request to either the posts service or the comments service.
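Just as a point of comparison, here is roughly what that comments=true idea could look like in a single monolith server. This is a minimal sketch, not part of the actual project: the in-memory posts and commentsByPostId stores, and the port number, are assumptions made purely for illustration.

```js
// Hypothetical monolith: one server owns both posts and comments,
// so embedding comments in the /posts response is trivial.
const express = require('express');
const app = express();

const posts = {};             // assumed shape: { [postId]: { id, title } }
const commentsByPostId = {};  // assumed shape: { [postId]: [{ id, content }] }

app.get('/posts', (req, res) => {
  const includeComments = req.query.comments === 'true';

  const result = Object.values(posts).map((post) => {
    if (!includeComments) return post;
    // Embed the related comments directly inside each post
    return { ...post, comments: commentsByPostId[post.id] || [] };
  });

  res.send(result);
});

app.listen(4000, () => console.log('Listening on 4000'));
```

With everything in one process and one data store, a single GET request to /posts?comments=true returns the whole bundle. The rest of this lecture is about how to get a similar result when posts and comments live in separate services.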
33 00:02:15,020 --> 00:02:20,280 So we need to figure out some way of solving this problem. Well, to solve this, 34 00:02:20,280 --> 00:02:24,220 this really goes back to something we discussed a little bit ago: the different styles of communication 35 00:02:24,220 --> 00:02:30,190 between services. We're going to take a look at two possible solutions here, and the pros and cons of 36 00:02:30,220 --> 00:02:35,230 these solutions are going to be very similar to those of the async and sync communication methods we 37 00:02:35,230 --> 00:02:36,760 discussed a little bit ago. 38 00:02:36,940 --> 00:02:42,250 So let's first begin with a solution based on synchronous communication. With this style of solution, 39 00:02:42,250 --> 00:02:48,340 we might say that we are going to continue to make a GET request to our posts service, and then maybe, 40 00:02:48,340 --> 00:02:51,290 to make sure that we get all the relevant comments embedded, 41 00:02:51,340 --> 00:02:57,670 we could add some code to our posts service to reach out automatically to our comments service and say, 42 00:02:57,700 --> 00:03:04,130 hey, give me all the comments you have associated with these post IDs. The comments service would then 43 00:03:04,130 --> 00:03:06,320 reply with all the relevant comments. 44 00:03:06,560 --> 00:03:11,060 Then the posts service would take those comments, assemble them all together with the relevant posts, and 45 00:03:11,060 --> 00:03:14,610 then send the entire bundle back over to the browser. 46 00:03:14,640 --> 00:03:19,330 So again, this is one possible solution that relies upon synchronous communication. 47 00:03:19,400 --> 00:03:23,960 The downsides to this approach are identical to the downsides we discussed earlier when we were first 48 00:03:23,960 --> 00:03:26,470 talking about synchronous communication. 49 00:03:26,510 --> 00:03:31,850 So while this approach is conceptually pretty darn easy to understand, well, there are a lot of downsides 50 00:03:31,850 --> 00:03:32,960 to this approach as well. 51 00:03:34,340 --> 00:03:37,840 First off, it introduces a dependency between these services. 52 00:03:37,850 --> 00:03:44,130 This is another thing we have to track and understand inside of our application. If we ever have the 53 00:03:44,160 --> 00:03:46,260 comments service go down for any reason, 54 00:03:46,260 --> 00:03:50,790 if this thing just mysteriously disappears, all of a sudden our posts service is probably not going 55 00:03:50,790 --> 00:03:58,000 to work correctly either. If that request from the posts service over to the comments service fails, then 56 00:03:58,000 --> 00:04:00,080 the overall request is going to fail as well. 57 00:04:00,130 --> 00:04:04,160 So we will show neither posts nor comments. 58 00:04:04,170 --> 00:04:08,880 We are also introducing another round-trip request here between one service and another. 59 00:04:09,030 --> 00:04:13,560 So if for any reason that request is slow, then the overall request coming from the browser is going 60 00:04:13,560 --> 00:04:17,180 to be delayed, or the response is going to be delayed as well. 61 00:04:17,230 --> 00:04:20,490 Once again, we discussed this back in the synchronous communication stuff.
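To make the synchronous approach a bit more concrete, here is a rough sketch of what the posts service's handler might look like if it reached out to the comments service itself. It assumes the comments service runs on port 4001 and exposes GET /posts/:id/comments, roughly matching the routes built earlier in the course, and it makes one follow-up request per post ID rather than a single batch call, since that is the route shape the comments service already has. Treat the hostnames, ports, and store shape as illustrative assumptions.

```js
// Sketch of the synchronous approach inside the posts service.
const express = require('express');
const axios = require('axios');
const app = express();

const posts = {}; // assumed shape: { [postId]: { id, title } }

app.get('/posts', async (req, res) => {
  try {
    // For every post, make a follow-up request to the comments service
    const withComments = await Promise.all(
      Object.values(posts).map(async (post) => {
        const { data: comments } = await axios.get(
          `http://localhost:4001/posts/${post.id}/comments`
        );
        return { ...post, comments };
      })
    );

    // Send the assembled bundle of posts plus comments to the browser
    res.send(withComments);
  } catch (err) {
    // If the comments service is down or slow, this whole request fails,
    // which is exactly the dependency problem described above.
    res.status(500).send({ error: 'Could not fetch comments' });
  }
});

app.listen(4000, () => console.log('Posts service listening on 4000'));
```

Notice that the posts service now cannot respond successfully unless the comments service is up and fast, and every extra hop adds latency to the browser's single request, which is the trade-off the next part of the lecture digs into.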
62 00:04:20,490 --> 00:04:25,500 Right now our application just consists of two separate services, but if we started to add in a bunch 63 00:04:25,500 --> 00:04:30,990 more services, and for some reason fetching comments from the comments service required some calls to other 64 00:04:30,990 --> 00:04:35,880 services, we're going to start to build up this tree, or really a web, of different requests, which really 65 00:04:35,880 --> 00:04:41,890 exacerbates all these earlier problems as well. So even though this is conceptually easy for you and 66 00:04:41,890 --> 00:04:42,220 me, 67 00:04:42,220 --> 00:04:47,710 it is perhaps not the best solution from an engineering standpoint. Now that we have taken a look at 68 00:04:47,710 --> 00:04:49,300 the synchronous communication approach, 69 00:04:49,300 --> 00:04:53,320 let's pause right here and then take a look at a second possible solution that we could use.