Hexo


  • 首页

  • 关于

  • 标签

  • 分类

  • 归档

  • 日程表

Common Linux Commands

Posted on 2019-12-28 | Category: Linux

awk

dmesg

dmesg is a tool that displays kernel ring buffer messages; for example, messages generated during system boot are written to /var/log/

dd

dd if=xxxx of=xxxx bs=1M
dd if=ideaIU-2018.3.5.tar.gz of=ideaIU-2018.3.5.tar.gz_bak bs=1M
656+1 records in
656+1 records out
688141905 bytes (688 MB, 656 MiB) copied, 3.01494 s, 228 MB/s
dd can be used to create a bootable USB drive; if (input file) and of (output file) usually take file paths, and bs is the block size copied per operation.
dd shows no progress while copying; you can run
watch -n 5 killall -USR1 dd
to have it report progress every 5 seconds.
dd if=ideaIU-2018.3.5.tar.gz_bak of=ideaIU-2018.3.5.tar.gz1 bs=1
8549812+0 records in
8549811+0 records out
8549811 bytes (8.5 MB, 8.2 MiB) copied, 17.4652 s, 490 kB/s
10933004+0 records in
10933003+0 records out
10933003 bytes (11 MB, 10 MiB) copied, 22.4725 s, 487 kB/s
LUKS (Linux Unified Key Setup) is the standard for Linux disk encryption: disk encryption that does not depend on the operating system.

parted: disk partitioning tool

mkfs: filesystem-level formatting

The Linux mkfs command is used to build a Linux filesystem on a specific partition.

e2label: set a volume label

e2label is used to view and set the volume label of a disk partition.

Connection Pool Management

Posted on 2019-12-27 | Category: go

https://www.jianshu.com/p/8e0bfed0bb90
https://www.ithome.io/b/a652bcbf-45fd-e01a-f52b-d2e3179a6d92.html
http://luodw.cc/2016/08/28/golang02/

Performance Comparison of String Concatenation Methods

Posted on 2019-12-27 | Category: go

Test

package performance_test

import (
	"bytes"
	"fmt"
	"strings"
	"testing"
)

const v string = "ni shuo wo shi bu shi tai wu liao le a?"

func BenchmarkTest1(b *testing.B) {
	var s string
	for i := 0; i < b.N; i++ {
		s = fmt.Sprintf("%s[%s]", s, v)
	}
}

func BenchmarkTest2(b *testing.B) {
	var s string
	for i := 0; i < b.N; i++ {
		s = strings.Join([]string{s, "[", v, "]"}, "")
	}
}

func BenchmarkTest3(b *testing.B) {
	buf := bytes.Buffer{}
	for i := 0; i < b.N; i++ {
		buf.WriteString("[")
		buf.WriteString(v)
		buf.WriteString("]")
	}
}

func BenchmarkTest4(b *testing.B) {
	var s string
	for i := 0; i < b.N; i++ {
		s = s + "[" + v + "]"
	}
}

go test -v -test.bench="." -benchmem String_concatenation_test.go

goos: darwin
goarch: amd64
BenchmarkTest1-4 50000 171597 ns/op 2063850 B/op 4 allocs/op
BenchmarkTest2-4 50000 137827 ns/op 1029071 B/op 1 allocs/op
BenchmarkTest3-4 20000000 85.8 ns/op 86 B/op 0 allocs/op
BenchmarkTest4-4 50000 87791 ns/op 1029071 B/op 1 allocs/op
PASS
ok command-line-arguments 22.715s

Conclusions

  • fmt.Sprintf and strings.Join are about the same speed
  • string + is roughly twice as fast as those two
  • bytes.Buffer is faster still by about three orders of magnitude (≈1000x in this run); note that it only appends to a single buffer rather than rebuilding the whole string on every iteration, so the comparison is not entirely apples-to-apples

References:

golang 几种常见的字符串连接性能比较

Connecting to MySQL from Go

Posted on 2019-12-27 | Category: go

How to make the golang MySQL driver time out a ping within 2 seconds

Usage

package db_conn_pool

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"testing"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

const (
	user     = "root"
	pw       = "root"
	ip       = "127.0.0.1"
	port     = 3306
	database = "test"
)

var (
	ctx context.Context
	db  *sql.DB
)

func init() {
	ctx = context.Background()
	var err error
	db, err = sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		log.Fatal(err)
	}
}

func Test_PingContext(t *testing.T) {
	ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
	defer cancel()

	status := "up"
	if err := db.PingContext(ctx); err != nil {
		status = "down"
	}
	log.Println(status)
}

func Test_Ping(t *testing.T) {
	status := "up"
	if err := db.Ping(); err != nil {
		status = "down"
	}
	log.Println(status)
}

When the database is unreachable (for example, blocked by a firewall), Test_PingContext returns an error automatically after the configured 1s timeout, while Test_Ping waits a long time before it finally times out.

go test -v db_pool_test.go             
=== RUN Test_PingContext
2019/12/27 12:10:48 start
2019/12/27 12:10:49 down
--- PASS: Test_PingContext (1.00s)
=== RUN Test_Ping
2019/12/27 12:10:49 start
[mysql] 2019/12/27 12:11:04 packets.go:36: unexpected EOF
[mysql] 2019/12/27 12:11:19 packets.go:36: unexpected EOF
[mysql] 2019/12/27 12:11:34 packets.go:36: unexpected EOF
2019/12/27 12:11:34 down
--- PASS: Test_Ping (45.45s)
PASS
ok command-line-arguments 46.471s

Configuring sql.DB for Better Performance

package db_conn_pool

import (
	"context"
	"database/sql"
	"fmt"
	"testing"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

const (
	user     = "root"
	pw       = "root"
	ip       = "127.0.0.1"
	port     = 3306
	database = "test"
)

func insertRecord(b *testing.B, db *sql.DB) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	_, err := db.ExecContext(ctx, "INSERT INTO isbns(value) VALUES ('978-3-598-21500-1')")
	if err != nil {
		b.Fatal(err)
	}
}

func BenchmarkMaxOpenConns1(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxOpenConns(1)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxOpenConns2(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxOpenConns(2)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxOpenConns5(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxOpenConns(5)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxOpenConns10(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxOpenConns(10)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxOpenConnsUnlimited(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxIdleConnsNone(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxIdleConns(0)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxIdleConns1(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxIdleConns(1)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxIdleConns2(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxIdleConns(2)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxIdleConns5(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxIdleConns(5)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkMaxIdleConns10(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetMaxIdleConns(10)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkConnMaxLifetimeUnlimited(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetConnMaxLifetime(0)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkConnMaxLifetime1000(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetConnMaxLifetime(1000 * time.Millisecond)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkConnMaxLifetime500(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetConnMaxLifetime(500 * time.Millisecond)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkConnMaxLifetime200(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetConnMaxLifetime(200 * time.Millisecond)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

func BenchmarkConnMaxLifetime100(b *testing.B) {
	db, err := sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
	if err != nil {
		b.Fatal(err)
	}
	db.SetConnMaxLifetime(100 * time.Millisecond)
	defer db.Close()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			insertRecord(b, db)
		}
	})
}

The SetMaxOpenConns Method

By default, there is no limit on the number of connections that can be open at the same time, but you can impose your own limit with the SetMaxOpenConns() method.

go test -test.bench="BenchmarkMaxOpenConns*" -benchmem sql_test.go

goos: darwin
goarch: amd64
BenchmarkMaxOpenConns1-4 1000 1234253 ns/op 574 B/op 14 allocs/op
BenchmarkMaxOpenConns2-4 2000 635522 ns/op 564 B/op 14 allocs/op
BenchmarkMaxOpenConns5-4 5000 263964 ns/op 436 B/op 12 allocs/op
BenchmarkMaxOpenConns10-4 5000 278689 ns/op 444 B/op 12 allocs/op
BenchmarkMaxOpenConnsUnlimited-4 5000 270506 ns/op 441 B/op 12 allocs/op
PASS
ok command-line-arguments 6.948s

In the benchmark above, as the maximum number of open connections goes from 1 to 2 to 5 to 10 to unlimited, the time per insert drops sharply at first and then levels off from about 5 connections onward.

The SetMaxIdleConns Method

By default, sql.DB keeps at most 2 idle connections in the pool. You can change this with the SetMaxIdleConns() method. In theory, allowing more idle connections in the pool improves performance, because it reduces the chance of having to establish a new connection from scratch and thus saves resources.

go test -test.bench="BenchmarkMaxIdleConns*" -benchmem sql_test.go

goos: darwin
goarch: amd64
BenchmarkMaxIdleConnsNone-4 1000 1643194 ns/op 7109 B/op 49 allocs/op
BenchmarkMaxIdleConns1-4 5000 273001 ns/op 505 B/op 12 allocs/op
BenchmarkMaxIdleConns2-4 5000 278861 ns/op 446 B/op 12 allocs/op
BenchmarkMaxIdleConns5-4 5000 293819 ns/op 430 B/op 12 allocs/op
BenchmarkMaxIdleConns10-4 5000 290196 ns/op 430 B/op 12 allocs/op
PASS
ok command-line-arguments 8.670s

Here is the same benchmark with the maximum number of idle connections set to none, 1, 2, 5, and 10 (and no limit on open connections).

go test -test.bench="BenchmarkConnMaxLifetime*" -benchmem sql_test.go

goos: darwin
goarch: amd64
BenchmarkConnMaxLifetimeUnlimited-4 5000 266796 ns/op 442 B/op 12 allocs/op
BenchmarkConnMaxLifetime1000-4 5000 272057 ns/op 448 B/op 12 allocs/op
BenchmarkConnMaxLifetime500-4 5000 281033 ns/op 460 B/op 12 allocs/op
BenchmarkConnMaxLifetime200-4 5000 271473 ns/op 464 B/op 12 allocs/op
BenchmarkConnMaxLifetime100-4 5000 284376 ns/op 507 B/op 12 allocs/op
PASS
ok command-line-arguments 8.005s

Questions

When does the connection pool destroy idle connections?

Say maxConns is 10 and maxIdleConns is 5, and 10 concurrent requests arrive at once; the pool establishes 10 connections in one go, and when the queries

Connection management in the pool

Reference

Configuring sql.DB for Better Performance

The init Function

Posted on 2019-12-26 | Category: go

context

Posted on 2019-12-26 | Category: go

Context

In real-world business code we sometimes hit this scenario: we need to actively notify a particular goroutine to stop. For example, we start a background goroutine that keeps doing something, say monitoring; once it is no longer needed, we have to tell that monitoring goroutine to stop, otherwise it keeps running forever and leaks.

Notifying via chan

Once a goroutine is started, we cannot control it from the outside; in most cases we simply wait for it to finish on its own. But what if it is a background goroutine that never finishes by itself, such as a monitor that runs forever?

In that situation, a naive approach is a global variable: other code modifies the variable to signal shutdown, and the background goroutine keeps checking it and exits once it sees the notification.

That works, but we first have to make the variable safe under concurrent access. A better approach is chan + select.

package main

import (
	"fmt"
	"time"
)

func main() {
	stop := make(chan bool)

	go func() {
		for {
			select {
			case <-stop:
				fmt.Println("monitor exited, stopping...")
				return
			default:
				fmt.Println("goroutine monitoring...")
				time.Sleep(2 * time.Second)
			}
		}
	}()

	time.Sleep(10 * time.Second)
	fmt.Println("ok, telling the monitor to stop")
	stop <- true
	// To check whether the monitor stopped: no more monitoring output means it stopped
	time.Sleep(5 * time.Second)
}

This chan + select pattern is a fairly elegant way to stop a goroutine, but it has its limits. What if many goroutines all need to be stopped? What if those goroutines spawn yet more goroutines, layer upon layer without end? That becomes very complex; even defining many channels can hardly solve it, because the chain of goroutine relationships makes the scenario extremely complicated.

First Look at Context

The scenario above is real: for example, each network Request starts a goroutine to do some work, and those goroutines may start others in turn. So we need a way to track goroutines in order to control them. That is what Go's Context provides; calling it a "context" is very fitting: it is the context of a goroutine.

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func(ctx context.Context) {
		for {
			select {
			case <-ctx.Done():
				fmt.Println("monitor exited, stopping...")
				return
			default:
				fmt.Println("goroutine monitoring...")
				time.Sleep(2 * time.Second)
			}
		}
	}(ctx)

	time.Sleep(10 * time.Second)
	fmt.Println("ok, telling the monitor to stop")
	cancel()
	// To check whether the monitor stopped: no more monitoring output means it stopped
	time.Sleep(5 * time.Second)
}

context.Background() returns an empty Context, usually used as the root of the whole Context tree. We then use context.WithCancel(parent) to create a cancelable child Context and pass it to the goroutine as a parameter, so the child Context can track that goroutine.

Inside the goroutine, use select on <-ctx.Done() to decide whether to stop: if a value is received, return and end the goroutine; otherwise, keep monitoring.

Controlling Multiple goroutines with One Context

func Test_context1(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	go watch(ctx, "[monitor 1]")
	go watch(ctx, "[monitor 2]")
	go watch(ctx, "[monitor 3]")

	time.Sleep(10 * time.Second)
	fmt.Println("ok, telling the monitors to stop")
	cancel()
	// To check whether the monitors stopped: no more monitoring output means they stopped
	time.Sleep(5 * time.Second)
}

func watch(ctx context.Context, name string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println(name, "monitor exited, stopping...")
			return
		default:
			fmt.Println(name, "goroutine monitoring...")
			time.Sleep(2 * time.Second)
		}
	}
}

The example starts three monitoring goroutines, each tracked by the same Context. When we call the cancel function, all three goroutines are stopped. That is the control Context gives us: like a switch, once pressed, everything based on this Context or a Context derived from it receives the notification, can run its cleanup, and finally releases its goroutine. This elegantly solves the problem that a goroutine is uncontrollable once started.

Parent and Child Context Cancellation Rules

The cancel function cancels a Context and every Context derived under that node, no matter how many levels deep.

func Test_context(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	go sonContext(ctx)
	go watch(ctx, "[monitor 4]")
	time.Sleep(2 * time.Second)
	cancel()
	time.Sleep(5 * time.Second)
}

func sonContext(ctx context.Context) {
	ctx, _ = context.WithCancel(ctx)
	go watch(ctx, "[monitor 1]")
	go watch(ctx, "[monitor 2]")
	go watch(ctx, "[monitor 3]")

	time.Sleep(10 * time.Second)
}

func watch(ctx context.Context, name string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println(name, "monitor exited, stopping...")
			return
		default:
			fmt.Println(name, "goroutine monitoring...")
			time.Sleep(2 * time.Second)
		}
	}
}

The parent Context here is the empty Context created by context.Background(), serving as the root. It is passed into sonContext, which derives a child Context and a child cancel function via WithCancel(). In the example above, calling the parent cancel shuts down both its own goroutines and those of the child Context.

func Test_context(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	go sonContext(ctx)
	go watch(ctx, "[monitor 4]")
	time.Sleep(15 * time.Second)
	cancel()
	time.Sleep(5 * time.Second)
}

func sonContext(ctx context.Context) {
	ctx, cancel := context.WithCancel(ctx)
	go watch(ctx, "[monitor 1]")
	go watch(ctx, "[monitor 2]")
	go watch(ctx, "[monitor 3]")

	time.Sleep(5 * time.Second)
	fmt.Println("ok, telling the monitors to stop")
	cancel()
	// To check whether the monitors stopped: no more monitoring output means they stopped
	time.Sleep(5 * time.Second)
}

func watch(ctx context.Context, name string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println(name, "monitor exited, stopping...")
			return
		default:
			fmt.Println(name, "goroutine monitoring...")
			time.Sleep(2 * time.Second)
		}
	}
}

Calling the child cancel() inside sonContext only cancels the current (child) Context, not the parent.

The Context Interface

type Context interface {
	Deadline() (deadline time.Time, ok bool)

	Done() <-chan struct{}

	Err() error

	Value(key interface{}) interface{}
}

Deriving Contexts

func WithCancel(parent Context) (ctx Context, cancel CancelFunc)
func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc)
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc)
func WithValue(parent Context, key, val interface{}) Context

WithTimeout and WithDeadline

WithTimeout and WithDeadline are essentially the same: the Context is canceled automatically once the timeout elapses. WithDeadline takes an absolute point in time, while WithTimeout takes a duration measured from now.

Rules for Using Context

  • Do not store a Context inside a struct; pass it explicitly as a parameter
  • A function or method that takes a Context should take it as its first parameter
  • Never pass a nil Context; if you are unsure what to pass, use context.TODO
  • Use Context's Value methods only for required request-scoped data; do not use them to pass arbitrary data
  • Context is safe for concurrent use and can be passed freely among multiple goroutines

References

Go Concurrency Patterns: Context
Go语言实战笔记(二十)| Go Context

sync

Posted on 2019-12-26 | Category: go

WaitGroup

What WaitGroup is for: it blocks the calling goroutine (typically main) until all of the goroutines it is waiting on have finished executing.

MySQL Two-Phase Commit

Posted on 2019-12-25 | Category: mysql

  1. For a transaction of 10,000 rows, does two-phase commit wait until InnoDB finishes committing before writing the binlog?

CREATE TABLE `test_binlog_cache` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增主键ID',
  `msg` varchar(1024) NOT NULL COMMENT '数据',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;

-- rotate to a new binlog file
flush logs;
-- reset all status counters
flush status;

insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);
  2. When is the binlog written?
       1. If several large transactions write to different tables at the same time, MySQL's memory usage grows very large
       2. The binlog is written only at commit time

LOCK BINLOG FOR BACKUP

Locks the binlog (supported by Percona Server; not available in stock MySQL), preventing new transactions from committing.

Transaction 1                        Transaction 2
begin; insert ….
                                     LOCK BINLOG FOR BACKUP
commit (Waiting for binlog lock)
                                     UNLOCK BINLOG
commit succeeds

Transaction 1 sits in the Waiting for binlog lock state; its commit succeeds only after the binlog lock is released.

Timestamps in the binlog

A single transaction

flush logs;begin;select now();select sleep(10);select now();insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);select sleep(10);select now();insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);select sleep(10);select now();commit;

mysqlbinlog --base64-output=decode-rows -v mysql-bin.000032

BEGIN
/*!*/;
# at 312
#191225 11:41:34 server id 18405 end_log_pos 376 CRC32 0xe3dbfa21 Table_map: `test7`.`test_binlog_cache` mapped to number 10560
# at 376
#191225 11:41:34 server id 18405 end_log_pos 1446 CRC32 0xeb5c669e Write_rows: table id 10560 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=54
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 1446
#191225 11:41:44 server id 18405 end_log_pos 1510 CRC32 0xade1e2bf Table_map: `test7`.`test_binlog_cache` mapped to number 10560
# at 1510
#191225 11:41:44 server id 18405 end_log_pos 2580 CRC32 0xdcef82a1 Write_rows: table id 10560 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=55
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 2580
#191225 11:41:54 server id 18405 end_log_pos 2611 CRC32 0x562d77a7 Xid = 29723285
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog *//*!*/;
DELIMITER ;

As you can see, the timestamps in the binlog are the timestamps at which each event was generated, not the commit timestamps.

Multiple transactions

Transaction 1:

flush logs;begin;select now();select sleep(10);select now();insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);select sleep(10);select now();insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);select sleep(10);select now();commit;

Transaction 2:

select sleep(5);begin;select now();select sleep(10);select now();insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);select sleep(10);select now();insert into test_binlog_cache(msg) SELECT REPEAT("xxxxxxxx", 128);select sleep(10);select now();commit;

mysqlbinlog --base64-output=decode-rows -v mysql-bin.000033

BEGIN
/*!*/;
# at 312
#191225 11:54:38 server id 18405 end_log_pos 376 CRC32 0x3d6d358c Table_map: `test7`.`test_binlog_cache` mapped to number 10560
# at 376
#191225 11:54:38 server id 18405 end_log_pos 1446 CRC32 0x255da85d Write_rows: table id 10560 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=60
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 1446
#191225 11:54:48 server id 18405 end_log_pos 1510 CRC32 0xf0814c50 Table_map: `test7`.`test_binlog_cache` mapped to number 10560
# at 1510
#191225 11:54:48 server id 18405 end_log_pos 2580 CRC32 0x12e6fcae Write_rows: table id 10560 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=62
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 2580
#191225 11:54:58 server id 18405 end_log_pos 2611 CRC32 0xf1dbf641 Xid = 29725418
COMMIT/*!*/;
# at 2611
#191225 11:55:03 server id 18405 end_log_pos 2659 CRC32 0x8be6c2e5 GTID [commit=yes]
SET @@SESSION.GTID_NEXT= '61bef212-b1ae-11e8-a427-fa163ecf322b:1833632'/*!*/;
# at 2659
#191225 11:54:43 server id 18405 end_log_pos 2732 CRC32 0x45daa78f Query thread_id=6465113 exec_time=0 error_code=0
SET TIMESTAMP=1577246083/*!*/;
BEGIN
/*!*/;
# at 2732
#191225 11:54:43 server id 18405 end_log_pos 2796 CRC32 0x57d6e77f Table_map: `test7`.`test_binlog_cache` mapped to number 10560
# at 2796
#191225 11:54:43 server id 18405 end_log_pos 3866 CRC32 0x2a2fa23e Write_rows: table id 10560 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=61
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 3866
#191225 11:54:53 server id 18405 end_log_pos 3930 CRC32 0x224edaf4 Table_map: `test7`.`test_binlog_cache` mapped to number 10560
# at 3930
#191225 11:54:53 server id 18405 end_log_pos 5000 CRC32 0x70bba4ea Write_rows: table id 10560 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=63
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 5000
#191225 11:55:03 server id 18405 end_log_pos 5031 CRC32 0x94725565 Xid = 29725429
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog *//*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

We can see that:

  • The binlog is written sequentially: the first transaction commits first, so its events are written first, but binlog timestamps reflect when each insert happened, so an earlier binlog entry can carry a later timestamp than the entries after it
  • GTID and Xid events are generated when the transaction executes the commit statement
  • Query, Rows_query, Table_map, and Update_rows events are generated when the transaction executes its update statements
  • Executing begin; produces no event

Replication: timestamps in the relay log and in the replica's binlog

The timestamps in the relay log are the master's write timestamps, which is easy to understand; but the timestamps in the replica's binlog are also still the master's write timestamps.

mysqlbinlog --base64-output=decode-rows -v mysql-bin.000001

use `test7`/*!*/;
SET TIMESTAMP=1577250016/*!*/;
CREATE TABLE `test_binlog_cache` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增主键ID',

`msg` varchar(1024) NOT NULL COMMENT '数据',

PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8
/*!*/;
# at 657
#191225 11:54:38 server id 18405 end_log_pos 705 CRC32 0x4ad78495 GTID last_committed=0 sequence_number=0 rbr_only=no
SET @@SESSION.GTID_NEXT= '61bef212-b1ae-11e8-a427-fa163ecf322b:1833631'/*!*/;
# at 705
#191225 11:54:38 server id 18405 end_log_pos 768 CRC32 0x28f82f1e Query thread_id=6464423 exec_time=3958 error_code=0
SET TIMESTAMP=1577246078/*!*/;
SET @@session.sql_mode=524288/*!*/;
/*!\C latin1 *//*!*/;
SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=45/*!*/;
BEGIN
/*!*/;
# at 768
#191225 11:54:38 server id 18405 end_log_pos 832 CRC32 0x964a2036 Table_map: `test7`.`test_binlog_cache` mapped to number 72
# at 832
#191225 11:54:38 server id 18405 end_log_pos 1902 CRC32 0xada61dcf Write_rows: table id 72 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=60
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 1902
#191225 11:54:48 server id 18405 end_log_pos 1966 CRC32 0xbc5e0681 Table_map: `test7`.`test_binlog_cache` mapped to number 72
# at 1966
#191225 11:54:48 server id 18405 end_log_pos 3036 CRC32 0xebfbc078 Write_rows: table id 72 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=62
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 3036
#191225 11:54:48 server id 18405 end_log_pos 3067 CRC32 0x7981f1f8 Xid = 39
COMMIT/*!*/;
# at 3067
#191225 11:54:43 server id 18405 end_log_pos 3115 CRC32 0x1d339bf4 GTID last_committed=0 sequence_number=0 rbr_only=no
SET @@SESSION.GTID_NEXT= '61bef212-b1ae-11e8-a427-fa163ecf322b:1833632'/*!*/;
# at 3115
#191225 11:54:43 server id 18405 end_log_pos 3178 CRC32 0xf7ae95e2 Query thread_id=6465113 exec_time=3953 error_code=0
SET TIMESTAMP=1577246083/*!*/;
BEGIN
/*!*/;
# at 3178
#191225 11:54:43 server id 18405 end_log_pos 3242 CRC32 0x6e13af16 Table_map: `test7`.`test_binlog_cache` mapped to number 72
# at 3242
#191225 11:54:43 server id 18405 end_log_pos 4312 CRC32 0xf6d98683 Write_rows: table id 72 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=61
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 4312
#191225 11:54:53 server id 18405 end_log_pos 4376 CRC32 0xbcedc966 Table_map: `test7`.`test_binlog_cache` mapped to number 72
# at 4376
#191225 11:54:53 server id 18405 end_log_pos 5446 CRC32 0xa6b6cb88 Write_rows: table id 72 flags: STMT_END_F
### INSERT INTO `test7`.`test_binlog_cache`
### SET
### @1=63
### @2='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# at 5446
#191225 11:54:53 server id 18405 end_log_pos 5477 CRC32 0x8df2b849 Xid = 42
COMMIT/*!*/;

Problems encountered:

  1. mysqlbinlog 5.5 fails to parse a MySQL 5.7 binlog file
  • Symptom

    ERROR: Error in Log_event::read_log_event(): 'Sanity check failed', data_len: 31, event_type: 35
    ERROR: Could not read entry at offset 123: Error in log format or read error.

  • Root cause
    MySQL 5.6 and later write new binlog event types, such as the GTID event.
    The mysqlbinlog shipped with MySQL 5.5 does not recognize these events.
  • Solution:
    Use the mysqlbinlog that matches (or is newer than) the server version that wrote the binlog.
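A minimal sketch of the workaround; the install path and binlog file name below are hypothetical, so substitute whatever matches your installation:

```
# Check which MySQL version a mysqlbinlog binary was built against;
# it must be at least as new as the server that wrote the binlog.
mysqlbinlog --version

# Parse a 5.7 binlog with the 5.7 tool (hypothetical path and file name).
/usr/local/mysql-5.7/bin/mysqlbinlog -vv --base64-output=decode-rows mysql-bin.000002
```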

https://yq.aliyun.com/articles/576704

Go testing

Posted on 2019-12-19 | Category: go

Unit tests

Use an init function to initialize shared variables

package db_conn_pool

import (
	"context"
	"database/sql"
	"fmt"
	_ "github.com/go-sql-driver/mysql"
	"log"
	"testing"
	"time"
)

const (
	user     = "root"
	pw       = "root"
	ip       = "127.0.0.1"
	port     = 3306
	database = "test"
)

var (
	ctx context.Context
	db  *sql.DB
)

func init() {
	ctx = context.Background()
	db, _ = sql.Open("mysql", user+":"+pw+"@tcp("+ip+":"+fmt.Sprintf("%d", port)+")/"+database+"?charset=utf8&autocommit=true&timeout=1s")
}

func Test_PingContext(t *testing.T) {
	ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
	defer cancel()

	status := "up"
	if err := db.PingContext(ctx); err != nil {
		status = "down"
	}
	log.Println(status)
}

Using go test to run a single file or a single function

For example, after go get-ing a package, you can cd into its directory and run its unit tests with the following commands.

Run a single test function

go test -v atomic_test.go -test.run TestSwapInt32

Run a single test file

go test -v atomic_test.go

Catching panics

Posted on 2019-12-18 | Category: go
func PanicTrace(kb int) []byte {
	s := []byte("/src/runtime/panic.go")
	e := []byte("\ngoroutine ")
	line := []byte("\n")
	stack := make([]byte, kb<<10) // kb KB of buffer for the stack dump
	length := runtime.Stack(stack, true)
	start := bytes.Index(stack, s)
	if start == -1 { // panic.go frame not found; keep the whole dump
		start = 0
	}
	stack = stack[start:length]
	start = bytes.Index(stack, line) + 1
	stack = stack[start:]
	end := bytes.LastIndex(stack, line)
	if end != -1 {
		stack = stack[:end]
	}
	end = bytes.Index(stack, e)
	if end != -1 {
		stack = stack[:end]
	}
	stack = bytes.TrimRight(stack, "\n")
	return stack
}