Building a MongoDB Sharded Cluster with Docker

How to build a MongoDB sharded cluster using Docker.

I. Write the Dockerfiles.

1. In a suitable directory, create the Dockerfile for mongod.

$ vi dockerfile
#version 1.0
FROM ubuntu
#maintainer
MAINTAINER hdx
#install
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get install -y openssh-server
RUN mkdir -p /var/run/sshd
#open ports 22 and 20001
EXPOSE 22
EXPOSE 20001
#CMD ["/usr/sbin/sshd","-D"]
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.2 main" | tee /etc/apt/sources.list.d/mongodb-org.list
#install mongodb
RUN apt-get update
RUN apt-get install -y mongodb-org
#create the mongodb data directory
RUN mkdir -p /data/db
ENTRYPOINT ["/usr/bin/mongod"]

2. In the same directory, create dockerfile_mongos for mongos.

$ vi dockerfile_mongos
FROM ubuntu/mongo:latest
ENTRYPOINT ["/usr/bin/mongos"]

II. Build the images from the Dockerfiles.

$ sudo docker build -t ubuntu/mongo:latest - < ./dockerfile
$ sudo docker build -t ubuntu/mongos:latest - < ./dockerfile_mongos

Check that the images were built:

$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu/mongos latest 6de9188a6c9c 37 hours ago 459MB
ubuntu/mongo latest e2e287510648 37 hours ago 459MB

Both images are present, so we can now create containers from them.

III. Build the mongo cluster from the images.

1. Create two shard services (shardsvr). Each shard is a replica set of four members: one primary, two secondaries, and one arbiter.

-d runs the container in the background.

-p binds a host port to a container port; the first number (e.g. 20001) is the host port and the second is the container port. Once bound, the MongoDB instance inside the container can be reached at the host's ip:port. My host IP is 10.0.0.116, so that IP appears in all the commands below.

Note: never append --fork to make mongod run in the background. If you do, the container's main process exits, Docker decides there is nothing left to run, and the container stops itself!

$ sudo docker run -d -p 20001:20001 --name rs1_container1 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs1
$ sudo docker run -d -p 20002:20001 --name rs1_container2 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs1
$ sudo docker run -d -p 20003:20001 --name rs1_container3 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs1
$ sudo docker run -d -p 20004:20001 --name rs1_container4 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs1
$ sudo docker run -d -p 20011:20001 --name rs2_container1 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs2
$ sudo docker run -d -p 20012:20001 --name rs2_container2 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs2
$ sudo docker run -d -p 20013:20001 --name rs2_container3 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs2
$ sudo docker run -d -p 20014:20001 --name rs2_container4 ubuntu/mongo:latest --shardsvr --port 20001 --replSet rs2
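The eight docker run commands above differ only in the port and container name, so if you script the setup they can be generated mechanically. A minimal Python sketch that just prints the commands for review (running them, e.g. via subprocess, is left to you):

```python
# Generate the eight 'docker run' commands for the two shard replica sets.
# rs1 uses host ports 20001-20004, rs2 uses 20011-20014, as above.
commands = []
for rs in (1, 2):
    for n in (1, 2, 3, 4):
        host_port = 20000 + (rs - 1) * 10 + n  # rs1 -> 2000n, rs2 -> 2001n
        commands.append(
            f"sudo docker run -d -p {host_port}:20001 "
            f"--name rs{rs}_container{n} ubuntu/mongo:latest "
            f"--shardsvr --port 20001 --replSet rs{rs}"
        )

print("\n".join(commands))
```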

2. Create two config servers (configsvr).

$ sudo docker run -d -p 21001:20001 --name config_container1 ubuntu/mongo:latest --configsvr --dbpath /data/db --replSet crs --port 20001
$ sudo docker run -d -p 21002:20001 --name config_container2 ubuntu/mongo:latest --configsvr --dbpath /data/db --replSet crs --port 20001

Note: --dbpath /data/db must be specified, because --configsvr defaults the data path to /data/configdb; if that directory does not exist, mongod errors out and the container exits immediately!

--replSet crs: since MongoDB 3.2, config servers can be run as a replica set.

3. Start two routing services (mongos).

$ sudo docker run -d -p 22001:20001 --name mongos_container1 ubuntu/mongos:latest --configdb crs/10.0.0.116:21001,10.0.0.116:21002 --port 20001
$ sudo docker run -d -p 22002:20001 --name mongos_container2 ubuntu/mongos:latest --configdb crs/10.0.0.116:21001,10.0.0.116:21002 --port 20001

Note: since MongoDB 3.2, --configdb must be given in the form crs/10.0.0.116:21001,10.0.0.116:21002 (the replica set name, a slash, then the host list), otherwise mongos errors out. The IPs must be the host machine's actual IP.

4. Check which Docker containers are running.

$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3bcc8c6ae02 ubuntu/mongos:latest "usr/bin/mongos --..." 17 hours ago Up 5 seconds 22/tcp, 0.0.0.0:22002->20001/tcp mongos_container2
532a28409c5d ubuntu/mongos:latest "usr/bin/mongos --..." 17 hours ago Up 8 seconds 22/tcp, 0.0.0.0:22001->20001/tcp mongos_container1
a071366a458d ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 14 seconds 22/tcp, 0.0.0.0:21002->20001/tcp config_container2
88a34cbe67a6 ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 18 seconds 22/tcp, 0.0.0.0:21001->20001/tcp config_container1
8cec58a3fdc4 ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 26 seconds 22/tcp, 0.0.0.0:20014->20001/tcp rs2_container4
910881c88d92 ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 29 seconds 22/tcp, 0.0.0.0:20013->20001/tcp rs2_container3
f69972ae2b0a ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 31 seconds 22/tcp, 0.0.0.0:20012->20001/tcp rs2_container2
c6e8cece4ef1 ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 35 seconds 22/tcp, 0.0.0.0:20011->20001/tcp rs2_container1
d93dba9c36a1 ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 48 seconds 22/tcp, 0.0.0.0:20004->20001/tcp rs1_container4
f49ebcbfec7d ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 51 seconds 22/tcp, 0.0.0.0:20003->20001/tcp rs1_container3
3f683b8848b3 ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 54 seconds 22/tcp, 0.0.0.0:20002->20001/tcp rs1_container2
0f0792844d9d ubuntu/mongo:latest "usr/bin/mongod --..." 17 hours ago Up 56 seconds 22/tcp, 0.0.0.0:20001->20001/tcp rs1_container1

The shards, config servers, and routers are all up.
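To keep the port layout straight, the topology shown by docker ps can be summarized in a few lines (host IP and ports exactly as used above):

```python
# Host-port layout of the cluster (every container listens on 20001 internally).
HOST = "10.0.0.116"

topology = {
    "rs1": [20001, 20002, 20003, 20004],  # shard 1 (fourth member is the arbiter)
    "rs2": [20011, 20012, 20013, 20014],  # shard 2 (fourth member is the arbiter)
    "crs": [21001, 21002],                # config server replica set
    "mongos": [22001, 22002],             # query routers that clients connect to
}

# Address list a client would use to reach the routers:
mongos_addrs = ",".join(f"{HOST}:{p}" for p in topology["mongos"])
print(mongos_addrs)
```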

IV. Basic mongo operations.

The services are running, but they are still independent of one another, so next we wire them together.

1. Initialize the rs1 shard replica set.

Pick any member of shard rs1, connect to it, and configure the set:

# If MongoDB is installed on the host, connect with:
yangs-MacBook-Air:~ yang$ mongo 10.0.0.116:20001
# Or connect through Docker; either of the two commands works:
yangs-MacBook-Air:~ yang$ docker exec -it mongos_container2 mongo 10.0.0.116:20001
# switch database
use admin
# build the config document
config = {_id:"rs1",members:[ {_id:0,host:"10.0.0.116:20001"}, {_id:1,host:"10.0.0.116:20002"}, {_id:2,host:"10.0.0.116:20003"},{_id:3,host:"10.0.0.116:20004",arbiterOnly:true}] }
# initiate the replica set
rs.initiate(config)
# check the replica set status
rs.status()
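The document passed to rs.initiate() is ordinary JSON, so when automating the setup you can build and sanity-check it before sending it to the server. A sketch of the same rs1 document in Python (field names as in the shell command above):

```python
# Build the rs1 replica-set config: three data-bearing members plus one arbiter.
host = "10.0.0.116"
ports = [20001, 20002, 20003, 20004]

config = {
    "_id": "rs1",
    "members": [{"_id": i, "host": f"{host}:{p}"} for i, p in enumerate(ports)],
}
config["members"][-1]["arbiterOnly"] = True  # the arbiter votes but stores no data

print(config)
```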

Result:

0f0792844d9d:20001(mongod-3.2.15)[PRIMARY:rs1] test> rs.status()
{
"set": "rs1",
"date": ISODate("2017-07-27T02:00:03.335Z"),
"myState": 1,
"term": NumberLong("4"),
"heartbeatIntervalMillis": NumberLong("2000"),
"members": [
{
"_id": 0,
"name": "10.0.0.116:20001",
"health": 1,
"state": 1,
"stateStr": "PRIMARY",
"uptime": 442,
"optime": {
"ts": Timestamp(1501120374, 1),
"t": NumberLong("4")
},
"optimeDate": ISODate("2017-07-27T01:52:54Z"),
"electionTime": Timestamp(1501120373, 1),
"electionDate": ISODate("2017-07-27T01:52:53Z"),
"configVersion": 1,
"self": true
},
{
"_id": 1,
"name": "10.0.0.116:20002",
"health": 1,
"state": 2,
"stateStr": "SECONDARY",
"uptime": 436,
"optime": {
"ts": Timestamp(1501120374, 1),
"t": NumberLong("4")
},
"optimeDate": ISODate("2017-07-27T01:52:54Z"),
"lastHeartbeat": ISODate("2017-07-27T02:00:03.139Z"),
"lastHeartbeatRecv": ISODate("2017-07-27T02:00:03.139Z"),
"pingMs": NumberLong("3"),
"syncingTo": "10.0.0.116:20003",
"configVersion": 1
},
{
"_id": 2,
"name": "10.0.0.116:20003",
"health": 1,
"state": 2,
"stateStr": "SECONDARY",
"uptime": 436,
"optime": {
"ts": Timestamp(1501120374, 1),
"t": NumberLong("4")
},
"optimeDate": ISODate("2017-07-27T01:52:54Z"),
"lastHeartbeat": ISODate("2017-07-27T02:00:03.136Z"),
"lastHeartbeatRecv": ISODate("2017-07-27T02:00:01.665Z"),
"pingMs": NumberLong("1"),
"syncingTo": "10.0.0.116:20001",
"configVersion": 1
},
{
"_id": 3,
"name": "10.0.0.116:20004",
"health": 1,
"state": 7,
"stateStr": "ARBITER",
"uptime": 431,
"lastHeartbeat": ISODate("2017-07-27T02:00:03.136Z"),
"lastHeartbeatRecv": ISODate("2017-07-27T01:59:59.780Z"),
"pingMs": NumberLong("1"),
"configVersion": 1
}
],
"ok": 1
}

2. Initialize the rs2 shard replica set.

# pick any member of shard rs2
yangs-MacBook-Air:~ yang$ mongo 10.0.0.116:20011
# switch database
use admin
# build the config document
config = {_id:"rs2",members:[ {_id:0,host:"10.0.0.116:20011"}, {_id:1,host:"10.0.0.116:20012"}, {_id:2,host:"10.0.0.116:20013"},{_id:3,host:"10.0.0.116:20014",arbiterOnly:true}] }
# initiate the replica set
rs.initiate(config)
# check the replica set status
rs.status()

3. Initialize the config server replica set.

# pick any member of the crs config server replica set
yangs-MacBook-Air:~ yang$ mongo 10.0.0.116:21001
# switch database
use admin
# build the config document (note configsvr:true)
config = {_id:"crs", configsvr:true, members:[ {_id:0,host:"10.0.0.116:21001"}, {_id:1,host:"10.0.0.116:21002"} ] }
# initiate the replica set
rs.initiate(config)
# check the replica set status
rs.status()

4. Register the shards with the config servers through mongos.

yangs-MacBook-Air:~ yang$ mongo 10.0.0.116:22001
use admin
db.runCommand({addshard:"rs1/10.0.0.116:20001,10.0.0.116:20002,10.0.0.116:20003,10.0.0.116:20004"})
db.runCommand({addshard:"rs2/10.0.0.116:20011,10.0.0.116:20012,10.0.0.116:20013,10.0.0.116:20014"})

Listing the shards gives the result below; note that arbiter nodes are not shown.

532a28409c5d:20001(mongos-3.2.15)[mongos] admin> db.runCommand({listshards:1})
{
"shards": [
{
"_id": "rs1",
"host": "rs1/10.0.0.116:20001,10.0.0.116:20002,10.0.0.116:20003"
},
{
"_id": "rs2",
"host": "rs2/10.0.0.116:20011,10.0.0.116:20012,10.0.0.116:20013"
}
],
"ok": 1
}

5. Enable sharding on the database and collection.

Create the index first: to spread the data evenly, we use a hashed index.

# on a mongos
use mydb
db.person.ensureIndex({id: "hashed"})

Enable sharding for the collection:

# on a mongos
db.runCommand({enablesharding:"mydb"})
db.runCommand({shardcollection:"mydb.person", key:{id:"hashed"}})

This enables sharding for the mydb database and shards its person collection, with the collection's id field as the shard key.

Note: not every collection in a sharded database is sharded; only collections for which shardcollection has been run are distributed, because not all collections need sharding.

6. Test the sharding result.

use mydb
for (i = 0; i < 5000; i++) {
    db.person.save({id: i, company: "smartestee"})
}

The stats below show that sharding worked: rs1 and rs2 hold roughly equal document counts. How even the split is depends on the shard key; see the documentation on choosing a shard key.
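Why the split comes out even: hashing scatters the monotonically increasing id across the key space before chunks are assigned. A rough illustration in Python, using md5 in place of MongoDB's internal hash function (an assumption for demonstration only, not MongoDB's exact algorithm):

```python
from hashlib import md5

# Bucket 5000 sequential ids into two "shards" by hashing the id first,
# mimicking how a hashed shard key spreads monotonic values.
counts = [0, 0]
for i in range(5000):
    h = int(md5(str(i).encode()).hexdigest(), 16)  # deterministic hash of the id
    counts[h % 2] += 1

print(counts)  # close to an even split between the two buckets
```

With a plain (non-hashed) index on a monotonically increasing id, new documents would instead pile onto the shard owning the highest chunk.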

532a28409c5d:20001(mongos-3.2.15)[mongos] mydb> db.person.stats()
{
"sharded": true,
"capped": false,
"ns": "mydb.person",
"count": 5000,
"size": 277766,
"storageSize": 196608,
"totalIndexSize": 323584,
"indexSizes": {
"_id_": 122880,
"id_hashed": 200704
},
"avgObjSize": 55.5532,
"nindexes": 2,
"nchunks": 4,
"shards": {
"rs1": {
"ns": "mydb.person",
"count": 2480,
"size": 137486,
"avgObjSize": 55,
"storageSize": 94208,
"capped": false,
"wiredTiger": {
"metadata": {
"formatVersion": 1
},
"creationString": "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type": "file",
"uri": "statistics:table:collection-16-4938398221887072838",
"LSM": {
"bloom filter false positives": 0,
"bloom filter hits": 0,
"bloom filter misses": 0,
"bloom filter pages evicted from cache": 0,
"bloom filter pages read into cache": 0,
"bloom filters in the LSM tree": 0,
"chunks in the LSM tree": 0,
"highest merge generation in the LSM tree": 0,
"queries that could have benefited from a Bloom filter that did not exist": 0,
"sleep for LSM checkpoint throttle": 0,
"sleep for LSM merge throttle": 0,
"total size of bloom filters": 0
},
"block-manager": {
"allocations requiring file extension": 0,
"blocks allocated": 0,
"blocks freed": 0,
"checkpoint size": 40960,
"file allocation unit size": 4096,
"file bytes available for reuse": 36864,
"file magic number": 120897,
"file major version number": 1,
"file size in bytes": 94208,
"minor version number": 0
},
"btree": {
"btree checkpoint generation": 4,
"column-store fixed-size leaf pages": 0,
"column-store internal pages": 0,
"column-store variable-size RLE encoded values": 0,
"column-store variable-size deleted values": 0,
"column-store variable-size leaf pages": 0,
"fixed-record size": 0,
"maximum internal page key size": 368,
"maximum internal page size": 4096,
"maximum leaf page key size": 2867,
"maximum leaf page size": 32768,
"maximum leaf page value size": 67108864,
"maximum tree depth": 0,
"number of key/value pairs": 0,
"overflow pages": 0,
"pages rewritten by compaction": 0,
"row-store internal pages": 0,
"row-store leaf pages": 0
},
"cache": {
"bytes currently in the cache": 21314,
"bytes read into cache": 16816,
"bytes written from cache": 0,
"checkpoint blocked page eviction": 0,
"data source pages selected for eviction unable to be evicted": 0,
"hazard pointer blocked page eviction": 0,
"in-memory page passed criteria to be split": 0,
"in-memory page splits": 0,
"internal pages evicted": 0,
"internal pages split during eviction": 0,
"leaf pages split during eviction": 0,
"modified pages evicted": 0,
"overflow pages read into cache": 0,
"overflow values cached in memory": 0,
"page split during eviction deepened the tree": 0,
"page written requiring lookaside records": 0,
"pages read into cache": 2,
"pages read into cache requiring lookaside entries": 0,
"pages requested from the cache": 1,
"pages written from cache": 0,
"pages written requiring in-memory restoration": 0,
"tracked dirty bytes in the cache": 0,
"unmodified pages evicted": 0
},
"cache_walk": {
"Average difference between current eviction generation when the page was last considered": 0,
"Average on-disk page image size seen": 0,
"Clean pages currently in cache": 0,
"Current eviction generation": 0,
"Dirty pages currently in cache": 0,
"Entries in the root page": 0,
"Internal pages currently in cache": 0,
"Leaf pages currently in cache": 0,
"Maximum difference between current eviction generation when the page was last considered": 0,
"Maximum page size seen": 0,
"Minimum on-disk page image size seen": 0,
"On-disk page image sizes smaller than a single allocation unit": 0,
"Pages created in memory and never written": 0,
"Pages currently queued for eviction": 0,
"Pages that could not be queued for eviction": 0,
"Refs skipped during cache traversal": 0,
"Size of the root page": 0,
"Total number of pages currently in cache": 0
},
"compression": {
"compressed pages read": 1,
"compressed pages written": 0,
"page written failed to compress": 0,
"page written was too small to compress": 0,
"raw compression call failed, additional data available": 0,
"raw compression call failed, no additional data available": 0,
"raw compression call succeeded": 0
},
"cursor": {
"bulk-loaded cursor-insert calls": 0,
"create calls": 1,
"cursor-insert key and value bytes inserted": 0,
"cursor-remove key bytes removed": 0,
"cursor-update value bytes updated": 0,
"insert calls": 0,
"next calls": 0,
"prev calls": 1,
"remove calls": 0,
"reset calls": 1,
"restarted searches": 0,
"search calls": 0,
"search near calls": 0,
"truncate calls": 0,
"update calls": 0
},
"reconciliation": {
"dictionary matches": 0,
"fast-path pages deleted": 0,
"internal page key bytes discarded using suffix compression": 0,
"internal page multi-block writes": 0,
"internal-page overflow keys": 0,
"leaf page key bytes discarded using prefix compression": 0,
"leaf page multi-block writes": 0,
"leaf-page overflow keys": 0,
"maximum blocks required for a page": 0,
"overflow values written": 0,
"page checksum matches": 0,
"page reconciliation calls": 0,
"page reconciliation calls for eviction": 0,
"pages deleted": 0
},
"session": {
"object compaction": 0,
"open cursor count": 1
},
"transaction": {
"update conflicts": 0
}
},
"nindexes": 2,
"totalIndexSize": 159744,
"indexSizes": {
"_id_": 61440,
"id_hashed": 98304
},
"ok": 1
},
"rs2": {
"ns": "mydb.person",
"count": 2520,
"size": 140280,
"avgObjSize": 55,
"storageSize": 102400,
"capped": false,
"wiredTiger": {
"metadata": {
"formatVersion": 1
},
"creationString": "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type": "file",
"uri": "statistics:table:collection-16--4365114664988837133",
"LSM": {
"bloom filter false positives": 0,
"bloom filter hits": 0,
"bloom filter misses": 0,
"bloom filter pages evicted from cache": 0,
"bloom filter pages read into cache": 0,
"bloom filters in the LSM tree": 0,
"chunks in the LSM tree": 0,
"highest merge generation in the LSM tree": 0,
"queries that could have benefited from a Bloom filter that did not exist": 0,
"sleep for LSM checkpoint throttle": 0,
"sleep for LSM merge throttle": 0,
"total size of bloom filters": 0
},
"block-manager": {
"allocations requiring file extension": 0,
"blocks allocated": 0,
"blocks freed": 0,
"checkpoint size": 45056,
"file allocation unit size": 4096,
"file bytes available for reuse": 40960,
"file magic number": 120897,
"file major version number": 1,
"file size in bytes": 102400,
"minor version number": 0
},
"btree": {
"btree checkpoint generation": 4,
"column-store fixed-size leaf pages": 0,
"column-store internal pages": 0,
"column-store variable-size RLE encoded values": 0,
"column-store variable-size deleted values": 0,
"column-store variable-size leaf pages": 0,
"fixed-record size": 0,
"maximum internal page key size": 368,
"maximum internal page size": 4096,
"maximum leaf page key size": 2867,
"maximum leaf page size": 32768,
"maximum leaf page value size": 67108864,
"maximum tree depth": 0,
"number of key/value pairs": 0,
"overflow pages": 0,
"pages rewritten by compaction": 0,
"row-store internal pages": 0,
"row-store leaf pages": 0
},
"cache": {
"bytes currently in the cache": 24852,
"bytes read into cache": 19676,
"bytes written from cache": 0,
"checkpoint blocked page eviction": 0,
"data source pages selected for eviction unable to be evicted": 0,
"hazard pointer blocked page eviction": 0,
"in-memory page passed criteria to be split": 0,
"in-memory page splits": 0,
"internal pages evicted": 0,
"internal pages split during eviction": 0,
"leaf pages split during eviction": 0,
"modified pages evicted": 0,
"overflow pages read into cache": 0,
"overflow values cached in memory": 0,
"page split during eviction deepened the tree": 0,
"page written requiring lookaside records": 0,
"pages read into cache": 2,
"pages read into cache requiring lookaside entries": 0,
"pages requested from the cache": 1,
"pages written from cache": 0,
"pages written requiring in-memory restoration": 0,
"tracked dirty bytes in the cache": 0,
"unmodified pages evicted": 0
},
"cache_walk": {
"Average difference between current eviction generation when the page was last considered": 0,
"Average on-disk page image size seen": 0,
"Clean pages currently in cache": 0,
"Current eviction generation": 0,
"Dirty pages currently in cache": 0,
"Entries in the root page": 0,
"Internal pages currently in cache": 0,
"Leaf pages currently in cache": 0,
"Maximum difference between current eviction generation when the page was last considered": 0,
"Maximum page size seen": 0,
"Minimum on-disk page image size seen": 0,
"On-disk page image sizes smaller than a single allocation unit": 0,
"Pages created in memory and never written": 0,
"Pages currently queued for eviction": 0,
"Pages that could not be queued for eviction": 0,
"Refs skipped during cache traversal": 0,
"Size of the root page": 0,
"Total number of pages currently in cache": 0
},
"compression": {
"compressed pages read": 1,
"compressed pages written": 0,
"page written failed to compress": 0,
"page written was too small to compress": 0,
"raw compression call failed, additional data available": 0,
"raw compression call failed, no additional data available": 0,
"raw compression call succeeded": 0
},
"cursor": {
"bulk-loaded cursor-insert calls": 0,
"create calls": 1,
"cursor-insert key and value bytes inserted": 0,
"cursor-remove key bytes removed": 0,
"cursor-update value bytes updated": 0,
"insert calls": 0,
"next calls": 0,
"prev calls": 1,
"remove calls": 0,
"reset calls": 1,
"restarted searches": 0,
"search calls": 0,
"search near calls": 0,
"truncate calls": 0,
"update calls": 0
},
"reconciliation": {
"dictionary matches": 0,
"fast-path pages deleted": 0,
"internal page key bytes discarded using suffix compression": 0,
"internal page multi-block writes": 0,
"internal-page overflow keys": 0,
"leaf page key bytes discarded using prefix compression": 0,
"leaf page multi-block writes": 0,
"leaf-page overflow keys": 0,
"maximum blocks required for a page": 0,
"overflow values written": 0,
"page checksum matches": 0,
"page reconciliation calls": 0,
"page reconciliation calls for eviction": 0,
"pages deleted": 0
},
"session": {
"object compaction": 0,
"open cursor count": 1
},
"transaction": {
"update conflicts": 0
}
},
"nindexes": 2,
"totalIndexSize": 163840,
"indexSizes": {
"_id_": 61440,
"id_hashed": 102400
},
"ok": 1
}
},
"ok": 1
}