MySQL key partition and MongoDB test


For a business requirement around activation codes, a comparison was run between MySQL and MongoDB. On the MySQL side there are a normal table and a KEY-partitioned table, at data volumes of 100 million and 1 billion rows; all MySQL access goes directly through the primary key, and the partition key is the PK. The MySQL table size is 90G and the MongoDB collection size is 157G.
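The post does not include the table DDL, so as a rough sketch only (table name, columns, and partition count below are made up for illustration), a KEY-partitioned InnoDB table whose partition key is the primary key would look like this:

-- Hypothetical DDL: the original table definition is not shown in the post.
CREATE TABLE activation_code_part (
    id   BIGINT UNSIGNED NOT NULL,            -- surrogate PK, also the partition key
    code CHAR(32)        NOT NULL,            -- the activation code itself
    used TINYINT         NOT NULL DEFAULT 0,
    PRIMARY KEY (id)
) ENGINE=InnoDB
PARTITION BY KEY (id)                         -- KEY partitioning hashes the primary key
PARTITIONS 64;                                -- partition count is an assumption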

[liuyang@yhdem ~]$ cat /proc/cpuinfo |grep processor |wc -l

24

[liuyang@yhdem ~]$ cat /etc/issue

Oracle Linux Server release 5.8

Kernel \r on an \m

mysql env:

mysql> select version();

+-----------+

| version() |

+-----------+

| 5.5.25a |

+-----------+

1 row in set (0.00 sec)

log_bin[OFF] innodb_flush_log_at_trx_commit [2] query_cache_type[OFF]

max_connect_errors[10] max_connections[214] max_user_connections[0]

sync_binlog[0] table_definition_cache[400]

table_open_cache[400] thread_cache_size[8] open_files_limit[30000]

innodb_adaptive_flushing[ON] innodb_adaptive_hash_index[ON]

innodb_buffer_pool_size[30.234375G]

innodb_file_per_table[ON] innodb_flush_log_at_trx_commit[2] innodb_flush_method[]

innodb_io_capacity[200] innodb_lock_wait_timeout[100]

innodb_log_buffer_size[128M]

innodb_log_file_size[200M] innodb_log_files_in_group[2]

innodb_max_dirty_pages_pct[75]

innodb_open_files[1600] innodb_read_io_threads[4] innodb_thread_concurrency[0]

innodb_write_io_threads[4]
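For reference, a listing like the one above can be pulled from any instance with SHOW VARIABLES; the statements below are generic checks, not taken from the original post:

-- dump the InnoDB settings listed above
SHOW GLOBAL VARIABLES LIKE 'innodb%';

-- and the handful of server-level ones
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('log_bin', 'sync_binlog', 'query_cache_type',
   'max_connections', 'table_open_cache', 'thread_cache_size');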

The figures below are all QPS statistics; TPS testing has not been done yet.
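Each MySQL test drives random single-row lookups through the primary key. A representative statement is sketched below, reusing the hypothetical table from above with an arbitrary id; "small random" appears to mean the random id range fits in the InnoDB buffer pool, while "full random" draws from the entire key space and therefore misses memory (hence the disk activity in the nmon lines that follow).

-- id drawn at random per query; in the "small random" runs the id range presumably
-- stays resident in the buffer pool, while "full random" spans all 1 billion rows
SELECT code, used
  FROM activation_code_part
 WHERE id = 73829145;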

no partition table with one billion rows --> small random select by pk

nmon disk activity: sda 1% busy (2.0 KB/s read, 35.9 KB/s write), sdb 0% busy (55.9 KB/s write). Totals: Read-MB/s=0.0, Writes-MB/s=0.2, Transfers/sec=18.0

partition table with one billion rows --> small random select by pk

nmon disk activity: sda 0% busy (8.0 KB/s write), sdb 0% busy (201.5 KB/s write). Totals: Read-MB/s=0.0, Writes-MB/s=0.4, Transfers/sec=46.9

no partition table with one billion rows --> full random select by pk

nmon disk activity: sda idle, sdb 100% busy (86.8 MB/s read, 0.2 MB/s write). Totals: Read-MB/s=173.6, Writes-MB/s=0.4, Transfers/sec=6448.1

partition table with one billion rows --> full random select by pk

nmon disk activity: sda idle, sdb 100% busy (89.6 MB/s read, 0.2 MB/s write). Totals: Read-MB/s=179.2, Writes-MB/s=0.3, Transfers/sec=6539.3

no partition table with 100 million rows --> full random select by pk

The following is the MongoDB test, against the same 1-billion-document collection, 157G.

[root@db-13 tmp]# mongo
MongoDB shell version: 2.0.8
connecting to: test
> db.foo.totalSize();
157875838416
> db.foo.find().count();
1000000000

Round 1: the full 128G of memory available, 16 threads, random queries across all 1 billion documents:

[root@db-13 tmp]# mongo test ./mongodb_benchmark_query.js

MongoDB shell version: 2.0.8

connecting to: test

threads: 16 queries/sec: 126151.69666666667

Round 2: 128G of memory, 24 threads, random queries over the first 100 million of the 1 billion documents:

[root@db-13 tmp]# mongo test ./mongodb_benchmark_query.js

MongoDB shell version: 2.0.8

connecting to: test

threads: 24 queries/sec: 166527.42333333334

Round 3: mongo started under the mysql user with that user's memory limited to 24G (see the ulimit output below), 24 threads, random queries over the first 100 million of the 1 billion documents:

[mysql@db-13 ~]$ ulimit -a

core file size (blocks, -c) 0

data seg size (kbytes, -d) unlimited

scheduling priority (-e) 0

file size (blocks, -f) unlimited

pending signals (-i) 1052672

max locked memory (kbytes, -l) 26055452

max memory size (kbytes, -m) 26055452

open files (-n) 131072

pipe size (512 bytes, -p) 8

POSIX message queues (bytes, -q) 819200

real-time priority (-r) 0

stack size (kbytes, -s) 10240

cpu time (seconds, -t) unlimited

max user processes (-u) unlimited

virtual memory (kbytes, -v) unlimited

file locks (-x) unlimited

[mysql@db-13 tmp]$ mongo test ./mongodb_benchmark_query.js

MongoDB shell version: 2.0.8

connecting to: test

threads: 24 queries/sec: 161358.03333333333

Round 4: mongo started under the mysql user with memory limited to 24G, 24 threads, random queries across all 1 billion documents:

[mysql@db-13 tmp]$ mongo test ./mongodb_benchmark_query.js

MongoDB shell version: 2.0.8

connecting to: test

threads: 24 queries/sec: 2549.2 ----------------------> physical disk I/O shows up at this point

----- The results below are for pure random disk I/O with no cache:

MongoDB initially used only 13M of memory.

Test result: with the memory cache completely cleared, QPS dropped to 149/s (see the summary at the end).

— The query script used:

// mongodb_benchmark_query.js: random findOne by _id against test.foo via benchRun.
// The upper bound of 1e8 matches the "first 100 million" rounds; it was presumably
// raised to 1e9 for the full-range rounds.
ops = [ { op: "findOne", ns: "test.foo", query: { _id: { "#RAND_INT": [ 1, 100000000 ] } } } ]

x = 24
{
    res = benchRun( {
        parallel : x ,    // number of client threads
        seconds : 60 ,    // duration of each run
        ops : ops
    } );
    print("threads: " + x + "\t queries/sec: " + res.query );
}

END :

In memory, PK access against the 1-billion-row normal table shows no drop-off compared with the 100-million-row normal table. The 1-billion-row partition table loses about 2/3 of its in-memory QPS relative to the 1-billion-row normal table, but once the full table no longer fits in memory the partition table actually performs somewhat better than the normal table. (Also note that an activation code is normally accessed only once.) For MongoDB, this kind of workload is easily handled: with enough memory QPS reached 160K+/s, but with memory constrained it collapsed to 2,549, and with the memory cache completely cleared QPS was only 149/s. Because this requirement involves a very large table (roughly estimated at 500GB), MySQL was chosen over MongoDB.