title: 【Reprint】Nginx Concepts I Wish I Knew Years Ago
date: 2021-08-14 09:32:45
comment: false
toc: true
category:
- Share
tags:
- Reprint
- nginx
- Concepts
This article is reprinted from: [Translation] Nginx Concepts I Wish I Knew Years Ago
- Original link: Nginx concepts I wish I knew years ago
- Original author: Aemie Jariwala
- Translation source: Juejin Translation Program
- Permanent link to this article: github.com/xitu/gold-m…
- Translator: joyking7
- Proofreader: PassionPenguin, ningzhy3
Nginx is a web server that follows a master-slave architecture and can be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
Wow! Complex terminology and confusing definitions filled with a lot of perplexing words, right? Don't worry, I can help everyone understand the basic architecture and terminology of Nginx first, and then we will install and create Nginx configurations.
To simplify things, just remember: Nginx is a magical web server.
In simple terms, a web server acts like a middleman. For example, if you want to visit dev.to and enter the address https://dev.to, your browser finds the web server for https://dev.to, which directs the request to the backend server, and the backend server returns the response to the client.
Proxy vs Reverse Proxy#
The basic function of Nginx is proxying, so now we need to understand what a proxy and a reverse proxy are.
Proxy#
Okay, we have one or more clients, an intermediate web server (in this case, we call it a proxy), and a server. The main thing here is that the server does not know which client is making the request. Is it a bit confusing? Let me explain with a diagram.
In this case, clients client1 and client2 send requests request1 and request2 to the server through the proxy server, and the backend server will not know whether request1 was sent by client1 or client2; it will just perform the operation.
Reverse Proxy#
In the simplest terms, a reverse proxy is just the opposite of a proxy. For instance, there is one client, one intermediate web server, and one or more backend servers. Let's continue to explain with a diagram!
In this case, the client sends a request through the web server, which directs the request to one of the many backend servers using an algorithm such as round-robin scheduling (the cutest one!) and then returns the response to the client. So here, the client does not know which backend server it is interacting with.
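As a minimal sketch of the flow above (the backend address is a hypothetical placeholder, not anything from the article), a reverse proxy in an Nginx configuration looks roughly like this:

```nginx
events {}

http {
    server {
        # The client talks only to Nginx on port 80.
        listen 80;

        location / {
            # Nginx relays every request to the backend and returns the
            # backend's response; the client never sees the backend directly.
            proxy_pass http://127.0.0.1:3000;  # hypothetical backend server
        }
    }
}
```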
Load Balancing#
Darn, another new term, but this one is easier to understand because it is a practical application of reverse proxy itself.
Let's first talk about the basic difference. In load balancing, there must be two or more backend servers, but in a reverse proxy setup, this is not necessary; it can even work with just a single backend server.
Let's take a look behind the scenes. If we have a large number of requests from clients, this load balancer will check the status of each backend server and allocate the load of requests, then send the response back to the client more quickly.
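As a hedged sketch (the server addresses and tuning values are illustrative, not from the article), Nginx lets you shape how the load balancer distributes requests across backends:

```nginx
upstream app_servers {
    # least_conn picks the backend with the fewest active connections
    # instead of plain round-robin.
    least_conn;

    server 127.0.0.1:3001 weight=2;                      # gets roughly twice the traffic
    server 127.0.0.1:3002;
    server 127.0.0.1:3003 max_fails=3 fail_timeout=30s;  # marked down after repeated failures
}
```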
Stateful vs Stateless Applications#
Alright everyone, I promise I will get to the Nginx code soon; let’s clarify all the basic concepts first!
Stateful Application#
This application stores an extra variable to save information that applies only to a single server instance.
What I mean is: if backend server server1 stores some information, that information is not stored on server2, so the interacting client (here, Bob) may not get the expected result, because the request may land on either server1 or server2. In this case, server1 will let Bob view his profile, but server2 will not. So even though stateful applications avoid many API calls to the database and are faster, they can cause this kind of inconsistency across servers.
Stateless Application#
Now, stateless applications make more API calls to the database, but there are fewer issues when clients interact with different backend servers.
I know you didn't understand what I meant. Simply put, if I send a request from the client to, say, backend server server1 through the web server, it will provide a token for accessing any other requests. The client can use the token and send requests to the web server, which will send the requests and token to any backend server, each of which will return the same expected output.
What is Nginx?#
Nginx is a web server, and so far, I have been using the term web server throughout this blog; honestly, it acts like a middleman.
This diagram is not difficult to understand; it just combines all the concepts I have explained so far. In this diagram, we have three backend servers running on ports 3001, 3002, and 3003, which share a database running on port 5432.
Now, when the client sends a request GET /employees to https://localhost (default port 443), Nginx sends the request to one of the backend servers based on an algorithm. The backend server fetches the information from the database and sends the JSON result back to Nginx, which then returns it to the client.
If we use an algorithm like round-robin scheduling, Nginx works like this: when client2 sends a request to https://localhost, Nginx first sends the request to port 3001 and returns the response to the client. For the next request, Nginx sends it to port 3002, and so on.
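The round-robin setup just described could be written as the following Nginx sketch (the ports follow the diagram; the TLS certificate paths are assumptions of mine, since listening on 443 with HTTPS requires them):

```nginx
events {}

http {
    upstream backend_pool {
        # Round-robin is the default: 3001, then 3002, then 3003, then back.
        server localhost:3001;
        server localhost:3002;
        server localhost:3003;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /path/to/cert.pem;  # hypothetical certificate
        ssl_certificate_key /path/to/key.pem;   # hypothetical key

        location /employees {
            proxy_pass http://backend_pool;
        }
    }
}
```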
That's a lot of concepts! But by now, you should have a clear understanding of what Nginx is and its related terminology. Now, let's continue to learn about the installation and configuration of Nginx.
Installation Process#
Finally, we have reached this step! If you can understand the Nginx concepts and see the code part, that's awesome!
Okay, honestly, installing Nginx on any operating system only requires one command. I am a Mac OSX user, so I will write the command based on it. But there are similar commands for Ubuntu and Windows and other Linux distributions.
```shell
brew install nginx
```
With just one command, your system will have Nginx installed! Amazing!
Running: so easy!😛#
Run the following command to check if Nginx is running on your system; it's another very simple step.
```shell
nginx
# OR
sudo nginx
```
After running the command, visit http://localhost:8080/ in your favorite browser, and you will see the following screen!
Basic Configuration and Example#
Alright, we will demonstrate the magic of Nginx through an example. First, create the following directory structure on your local machine:
```
.
├── nginx-demo
│   ├── content
│   │   ├── first.txt
│   │   ├── index.html
│   │   └── index.md
│   └── main
│       └── index.html
└── temp-nginx
    └── outsider
        └── index.html
```
At the same time, add some basic placeholder content to the HTML and Markdown files.
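If you want to scaffold this quickly, a small shell sketch (the placeholder file contents here are mine, not the author's) is:

```shell
# Create the demo directory tree and fill each file with placeholder text.
mkdir -p nginx-demo/content nginx-demo/main temp-nginx/outsider

echo "first file content" > nginx-demo/content/first.txt
echo "<h1>content index</h1>" > nginx-demo/content/index.html
echo "# content index" > nginx-demo/content/index.md
echo "<h1>main index</h1>" > nginx-demo/main/index.html
echo "<h1>outsider index</h1>" > temp-nginx/outsider/index.html
```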
What effect do we want to achieve?#
Here, we have two separate folders, nginx-demo and temp-nginx, each containing static HTML files. We will focus on serving these two folders on a common port and setting the rules we like.
Now back on track. We can modify the nginx.conf file located at /usr/local/etc/nginx (translator's note: the default installation path) to change the default Nginx configuration. I have Vim on my system, so I will use Vim, but feel free to use the editor of your choice.
```shell
cd /usr/local/etc/nginx
vim nginx.conf
```
This will open a default Nginx configuration file, but I really don't want to use its default configuration. Therefore, I usually copy this configuration file and then modify the original file. We will do the same here.
```shell
cp nginx.conf copy-nginx.conf
rm nginx.conf && vim nginx.conf
```
Now open an empty file, and we will add our configuration to it.
- Add a basic configuration. Adding `events {}` is necessary because it is typically used to represent the number of workers in the Nginx architecture. We use `http` to tell Nginx that we will work at layer 7 of the OSI model.

  Here, we let Nginx listen on port 5000 and point it to the static files in the `/nginx-demo/main` folder.

  ```nginx
  http {
      server {
          listen 5000;
          root /path/to/nginx-demo/main/;
      }
  }

  events {}
  ```
- Next, we will add additional rules for the `/content` and `/outsider` URLs, where `/outsider` points to a directory outside the root directory mentioned in the first step.

  Here, `location /content` means that whatever root I define inside it, the `/content` sub-URL is appended to that root. So when I set `root /path/to/nginx-demo/`, I am telling Nginx that a request to `http://localhost:5000/content/` should serve the static files from the `/path/to/nginx-demo/content/` folder.

  ```nginx
  http {
      server {
          listen 5000;
          root /path/to/nginx-demo/main/;

          location /content {
              root /path/to/nginx-demo/;
          }

          location /outsider {
              root /path/temp-nginx/;
          }
      }
  }

  events {}
  ```
Cool! Nginx is not limited to defining root URLs; we can also set rules to prevent clients from accessing certain files.
- We will add one more rule to the defined main server to block access to any `.md` files. We can use regular expressions in Nginx, and the rule is defined as follows:

  ```nginx
  location ~ \.md$ {
      return 403;
  }
  ```
- Finally, let's look at the popular `proxy_pass` directive. Now that we understand what a proxy and a reverse proxy are, we will define another server running on port 8888, so we now have two servers, on ports 5000 and 8888.

  What we want is this: when the client reaches port 8888 through Nginx, the request is passed on to port 5000, and the response is returned to the client!

  ```nginx
  server {
      listen 8888;

      location / {
          proxy_pass http://localhost:5000/;
      }

      location /new {
          proxy_pass http://localhost:5000/outsider/;
      }
  }
  ```
Let's take a look at the complete code together!😁#
```nginx
http {
    server {
        listen 5000;
        root /path/to/nginx-demo/main/;

        location /content {
            root /path/to/nginx-demo/;
        }

        location /outsider {
            root /path/temp-nginx/;
        }

        location ~ \.md$ {
            return 403;
        }
    }

    server {
        listen 8888;

        location / {
            proxy_pass http://localhost:5000/;
        }

        location /new {
            proxy_pass http://localhost:5000/outsider/;
        }
    }
}

events {}
```
Check the configuration with `nginx -t`, then run it with `sudo nginx`.
Additional Nginx Commands!#
- Start the Nginx web server for the first time.

  ```shell
  nginx
  # OR
  sudo nginx
  ```

- Reload the running Nginx web server.

  ```shell
  nginx -s reload
  # OR
  sudo nginx -s reload
  ```

- Stop the running Nginx web server.

  ```shell
  nginx -s stop
  # OR
  sudo nginx -s stop
  ```

- Find out which Nginx processes are running on the system.

  ```shell
  ps -ef | grep nginx
  ```

The fourth command is very important: if the first three commands throw errors, you can use it to find all running Nginx processes, kill them, and then restart the Nginx service.
To kill a process, you first need its PID; then kill it with the following command:

```shell
kill -9 <PID>
# OR
sudo kill -9 <PID>
```
Before concluding this article, I would like to state that the images and visuals I used are sourced from Google Images and the YouTube tutorial provided by Hussein Nasser.
This concludes our basic understanding and configuration of Nginx. If you are interested in advanced configurations of Nginx, please let me know through comments. Until then, enjoy the fun of programming and explore the magic of Nginx!👋
If you find any errors or room for improvement in the translation, feel free to submit a PR with your changes to the Juejin Translation Program to earn reward points. The permanent link at the beginning of this article is the Markdown link to this article on GitHub.
The Juejin Translation Program is a community for translating high-quality internet technical articles; the source material is English articles shared on Juejin. The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence, and more. To see more high-quality translations, please follow the Juejin Translation Program, its official Weibo, and its Zhihu column.