Unverified commit 2ac6512f authored by Agade09, committed by GitHub

Fix typos in Twitter and web crawler exercises (#438)

parent 914736a2
@@ -26,7 +26,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
 #### Out of scope
 * **Service** pushes tweets to the Twitter Firehose and other streams
-* **Service** strips out tweets based on user's visibility settings
+* **Service** strips out tweets based on users' visibility settings
     * Hide @reply if the user is not also following the person being replied to
     * Respect 'hide retweets' setting
 * Analytics
@@ -129,7 +129,7 @@ If our **Memory Cache** is Redis, we could use a native Redis list with the foll
 | tweet_id user_id meta | tweet_id user_id meta | tweet_id user_id meta |
 ```
-The new tweet would be placed in the **Memory Cache**, which populates user's home timeline (activity from people the user is following).
+The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).
 We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
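The fan-out step in this hunk — placing a new tweet on each follower's home timeline in the **Memory Cache** — can be sketched in Python. This is a minimal sketch, assuming an in-memory dict of lists as a stand-in for Redis (the real calls would be `LPUSH`/`LTRIM` on a native Redis list); the names `fan_out_tweet`, `followers`, and the `TIMELINE_MAX` cap are illustrative, not from the original exercise.

```python
from collections import defaultdict

# Stand-in for the Redis Memory Cache: one list per user, newest first,
# each entry mirroring the (tweet_id, user_id, meta) structure above.
TIMELINE_MAX = 800  # hypothetical cap, enforced the way LTRIM would be
home_timelines = defaultdict(list)

def fan_out_tweet(tweet_id, user_id, followers):
    """Place a new tweet on each follower's home timeline,
    trimming each timeline to the most recent entries."""
    entry = {"tweet_id": tweet_id, "user_id": user_id, "meta": {}}
    for follower_id in followers:
        timeline = home_timelines[follower_id]
        timeline.insert(0, entry)    # LPUSH equivalent: prepend newest
        del timeline[TIMELINE_MAX:]  # LTRIM equivalent: cap the length
```

With Redis itself, the same step would be `LPUSH home_timeline:<follower_id> <entry>` followed by `LTRIM home_timeline:<follower_id> 0 799` per follower.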
@@ -77,7 +77,7 @@ Handy conversion guide:
 ### Use case: Service crawls a list of urls
-We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc
+We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.
 We'll use a table `crawled_links` to store processed links and their page signatures.
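The crawl loop this hunk describes — consuming the popularity-ranked `links_to_crawl` and recording each processed link's page signature in `crawled_links` — can be sketched as follows. This is an illustrative sketch, assuming a plain dict as a stand-in for the `crawled_links` table; the `page_signature` helper (an MD5 of the page content) and the injected `fetch` callable are hypothetical, not part of the original exercise.

```python
import hashlib

def page_signature(content: str) -> str:
    """Hypothetical signature: a hash of the page content, so the same
    page reached via different urls is only processed once."""
    return hashlib.md5(content.encode("utf-8")).hexdigest()

def crawl(links_to_crawl, fetch):
    """Consume the ranked `links_to_crawl`, skipping any page whose
    signature was already recorded in `crawled_links`."""
    crawled_links = {}       # url -> signature, standing in for the table
    seen_signatures = set()
    for url in links_to_crawl:
        content = fetch(url)
        signature = page_signature(content)
        if signature in seen_signatures:
            continue         # duplicate page content, already processed
        crawled_links[url] = signature
        seen_signatures.add(signature)
    return crawled_links
```

For example, if two urls serve identical content, only the first (higher-ranked) one ends up in `crawled_links`.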