Benchmarking an Nginx Server with a Go Program
There are many ways to serve a Go HTTP application today, and the best choice depends on each application's circumstances. Nginx currently looks like the standard web server for every new project, even though there are plenty of other good web servers. But what is the overhead of serving a Go application behind Nginx? Do we need Nginx features (vhosts, load balancing, caching, and so on), or is it better to serve directly from Go? And if you do need Nginx, what is the fastest connection mechanism? These are the questions I try to answer here. The purpose of this benchmark is not to prove that Go is faster or slower than Nginx; that would be silly.
These are the different setups we are going to compare:
- Go HTTP standalone (as the control group)
- Nginx proxy to Go HTTP
- Nginx fastcgi to Go TCP FastCGI
- Nginx fastcgi to Go Unix Socket FastCGI
Hardware
Since all setups are compared on the same hardware, a cheap machine was chosen. This should not be a big deal.
- Samsung laptop NP550P5C-AD1BR
- Intel Core i7 3630QM @2.4GHz (quad core, 8 threads)
- CPU caches: (L1: 256KiB, L2: 1MiB, L3: 6MiB)
- RAM 8GiB DDR3 1600MHz
Software
- Ubuntu 13.10 amd64 Saucy Salamander (updated)
- Nginx 1.4.4 (1.4.4-1~saucy0 amd64)
- Go 1.2 (linux/amd64)
- wrk 3.0.4
Setup
Kernel
Just a little bit of tuning, raising the kernel limits. If you have better ideas for these variables, please leave a comment below:
fs.nr_open 9999999
net.core.netdev_max_backlog 4096
net.core.rmem_max 16777216
net.core.somaxconn 65535
net.core.wmem_max 16777216
net.ipv4.ip_forward 0
net.ipv4.ip_local_port_range 1025 65535
net.ipv4.tcp_fin_timeout 30
net.ipv4.tcp_keepalive_time 30
net.ipv4.tcp_max_syn_backlog 20480
net.ipv4.tcp_max_tw_buckets 400000
net.ipv4.tcp_no_metrics_save 1
net.ipv4.tcp_syn_retries 2
net.ipv4.tcp_synack_retries 2
net.ipv4.tcp_tw_recycle 1
net.ipv4.tcp_tw_reuse 1
vm.min_free_kbytes 65536
vm.overcommit_memory 1
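The keys above are listed as plain key/value pairs; a minimal sketch of applying them, assuming they are written to a dedicated file under /etc/sysctl.d/ (the file name 99-benchmark.conf is hypothetical):
# /etc/sysctl.d/99-benchmark.conf
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 20480
# ... plus the remaining keys from the list above ...

# load the file without rebooting
$ sudo sysctl -p /etc/sysctl.d/99-benchmark.conf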
Limits
The maximum number of open files for root and www-data was configured to 200000.
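The article does not show how this limit was applied; a minimal sketch, assuming it is set through /etc/security/limits.conf with pam_limits enabled:
# /etc/security/limits.conf (assumed mechanism)
root      soft  nofile  200000
root      hard  nofile  200000
www-data  soft  nofile  200000
www-data  hard  nofile  200000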
Nginx
A few Nginx tunings are required. As some people pointed out to me, I disabled gzip to keep the comparison fair. Here is its configuration file, /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 300;
    keepalive_requests 10000;

    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    server_tokens off;
    dav_methods off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    gzip off;
    gzip_vary off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
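After editing the configuration it is worth validating it and restarting Nginx; a minimal sketch, assuming the stock Ubuntu service scripts:
$ sudo nginx -t
$ sudo service nginx restart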
Nginx vhosts
upstream go_http {
    server 127.0.0.1:8080;
    keepalive 300;
}

server {
    listen 80;
    server_name go.http;
    access_log off;
    error_log /dev/null crit;

    location / {
        proxy_pass http://go_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

upstream go_fcgi_tcp {
    server 127.0.0.1:9001;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.tcp;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_tcp;
    }
}

upstream go_fcgi_unix {
    server unix:/tmp/go.sock;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.unix;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_unix;
    }
}
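The three vhosts are selected purely by server_name, so the hostnames used by curl and wrk below must resolve to the local machine; a minimal sketch, assuming /etc/hosts is used for that:
# /etc/hosts (assumed)
127.0.0.1   go.http go.fcgi.tcp go.fcgi.unix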
Go source code
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
	"os"
	"os/signal"
	"syscall"
)

var (
	abort bool
)

const (
	SOCK = "/tmp/go.sock"
)

type Server struct {
}

func (s Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	body := "Hello World\n"
	// Try to keep the same amount of headers
	w.Header().Set("Server", "gophr")
	w.Header().Set("Connection", "keep-alive")
	w.Header().Set("Content-Type", "text/plain")
	w.Header().Set("Content-Length", fmt.Sprint(len(body)))
	fmt.Fprint(w, body)
}

func main() {
	sigchan := make(chan os.Signal, 1)
	signal.Notify(sigchan, os.Interrupt)
	signal.Notify(sigchan, syscall.SIGTERM)

	server := Server{}

	// Standalone HTTP server on :8080
	go func() {
		http.Handle("/", server)
		if err := http.ListenAndServe(":8080", nil); err != nil {
			log.Fatal(err)
		}
	}()

	// FastCGI over TCP on :9001
	go func() {
		tcp, err := net.Listen("tcp", ":9001")
		if err != nil {
			log.Fatal(err)
		}
		fcgi.Serve(tcp, server)
	}()

	// FastCGI over a Unix domain socket
	go func() {
		unix, err := net.Listen("unix", SOCK)
		if err != nil {
			log.Fatal(err)
		}
		fcgi.Serve(unix, server)
	}()

	// Block until SIGINT/SIGTERM, then clean up the socket file
	<-sigchan

	if err := os.Remove(SOCK); err != nil {
		log.Fatal(err)
	}
}
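The program never calls runtime.GOMAXPROCS, so the GOMAXPROCS=1 and GOMAXPROCS=8 rounds below presumably set the value through the environment variable, which the Go runtime reads at startup. A minimal sketch of building and running it that way (the file and binary names gophr.go/gophr are hypothetical, taken from the Server header):
$ go build -o gophr gophr.go
$ sudo -u www-data env GOMAXPROCS=1 ./gophr   # first round of benchmarks
$ sudo -u www-data env GOMAXPROCS=8 ./gophr   # second round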
Checking the HTTP headers
In the name of fairness, all requests must be the same size.
$ curl -sI http://127.0.0.1:8080/
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Content-Type: text/plain
Server: gophr
Date: Sun, 15 Dec 2013 14:59:14 GMT
$ curl -sI http://127.0.0.1:8080/ | wc -c
141
$ curl -sI http://go.http/
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Dec 2013 14:59:31 GMT
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
$ curl -sI http://go.http/ | wc -c
141
$ curl -sI http://go.fcgi.tcp/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 14:59:40 GMT
Server: gophr
$ curl -sI http://go.fcgi.tcp/ | wc -c
141
$ curl -sI http://go.fcgi.unix/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 15:00:15 GMT
Server: gophr
$ curl -sI http://go.fcgi.unix/ | wc -c
141
Starting the engines
- Configure the kernel with sysctl
- Configure Nginx
- Configure the Nginx vhosts
- Start the services as www-data
- Run the benchmarks
Benchmarks
GOMAXPROCS = 1
Go standalone
# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 116.96ms 17.76ms 173.96ms 85.31%
Req/Sec 429.16 49.20 589.00 69.44%
1281567 requests in 29.98s, 215.11MB read
Requests/sec: 42745.15
Transfer/sec: 7.17MB
Nginx + Go through HTTP
# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 124.57ms 18.26ms 209.70ms 80.17%
Req/Sec 406.29 56.94 0.87k 89.41%
1198450 requests in 29.97s, 201.16MB read
Requests/sec: 39991.57
Transfer/sec: 6.71MB
Nginx + Go through FastCGI TCP
# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 514.57ms 119.80ms 1.21s 71.85%
Req/Sec 97.18 22.56 263.00 79.59%
287416 requests in 30.00s, 48.24MB read
Socket errors: connect 0, read 0, write 0, timeout 661
Requests/sec: 9580.75
Transfer/sec: 1.61MB
Nginx + Go through FastCGI Unix Socket
# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 425.64ms 80.53ms 925.03ms 76.88%
Req/Sec 117.03 22.13 255.00 81.30%
350162 requests in 30.00s, 58.77MB read
Socket errors: connect 0, read 0, write 0, timeout 210
Requests/sec: 11670.72
Transfer/sec: 1.96MB
GOMAXPROCS = 8
Go standalone
# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 39.25ms 8.49ms 86.45ms 81.39%
Req/Sec 1.29k 129.27 1.79k 69.23%
3837995 requests in 29.89s, 644.19MB read
Requests/sec: 128402.88
Transfer/sec: 21.55MB
Nginx + Go through HTTP
# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 336.77ms 297.88ms 632.52ms 60.16%
Req/Sec 2.36k 2.99k 19.11k 84.83%
2232068 requests in 29.98s, 374.64MB read
Requests/sec: 74442.91
Transfer/sec: 12.49MB
Nginx + Go through FastCGI TCP
# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 217.69ms 121.22ms 1.80s 75.14%
Req/Sec 263.09 102.78 629.00 62.54%
721027 requests in 30.01s, 121.02MB read
Socket errors: connect 0, read 0, write 176, timeout 1343
Requests/sec: 24026.50
Transfer/sec: 4.03MB
Nginx + Go through FastCGI Unix Socket
# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
100 threads and 5000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 694.32ms 332.27ms 1.79s 62.13%
Req/Sec 646.86 669.65 6.11k 87.80%
909836 requests in 30.00s, 152.71MB read
Requests/sec: 30324.77
Transfer/sec: 5.09MB
Conclusion
In the first round of benchmarks, some Nginx settings were not well optimized (gzip was enabled and there were no keep-alive connections to the Go backends). After switching to wrk and optimizing Nginx as recommended, the results were quite different.
With GOMAXPROCS=1 the Nginx overhead is not that big, but with GOMAXPROCS=8 the difference is huge. I may try other settings in the future. If you need Nginx features such as virtual hosts, load balancing, caching, and so on, use the HTTP proxy and avoid FastCGI. Some people say Go's FastCGI implementation is not well optimized, and that may be the reason for the huge differences seen in these results.