Answer: while doing this I realized that I had a misspelling in my client settings within the /etc/sysctl.conf file for net.ipv4.ip_local_port_range.
I am now able to connect 956,591 MQTT clients to my Apollo server in 188 seconds.
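For reference, a minimal sketch of the corrected client-side entry and how to apply it without a reboot; the range shown matches the settings dump further down, but pick whatever range suits your setup:

# /etc/sysctl.conf (client) - widen the ephemeral port range used for outgoing connections
net.ipv4.ip_local_port_range = 1024 65535

# reload the file so the change takes effect immediately
sysctl -p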
More info: to isolate whether this is an OS connection limitation or a broker limitation, I decided to write a simple client/server.
The server:

List<Socket> clients = new ArrayList<>();
ServerSocket server = new ServerSocket(1884);
while (true) {
    Socket client = server.accept();
    clients.add(client);
}
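For completeness, here is a self-contained sketch of that test server; the class name and import layout are my own, the accept loop is the same as above:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class AcceptOnlyServer {
    public static void main(String[] args) throws IOException {
        List<Socket> clients = new ArrayList<>();
        // listen on the same port the broker would use
        ServerSocket server = new ServerSocket(1884);
        while (true) {
            // accept and hold every connection open; never read or write
            Socket client = server.accept();
            clients.add(client);
        }
    }
}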
The client:

List<Socket> clients = new ArrayList<>();
while (true) {
    InetAddress clientIpToBindTo = getNextClientVIP();
    Socket client = new Socket(hostname, 1884, clientIpToBindTo, 0);
    clients.add(client);
}
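And a self-contained sketch of the test client, assuming the server host and the local (virtual) IPs are passed on the command line; getNextClientVIP() here is just a round-robin over those addresses and may differ from the original helper:

import java.net.InetAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ConnectOnlyClient {
    private static InetAddress[] vips;  // local IPs to bind outgoing connections to
    private static int next = 0;

    // hypothetical helper: cycle through the configured local addresses
    private static InetAddress getNextClientVIP() {
        return vips[next++ % vips.length];
    }

    public static void main(String[] args) throws Exception {
        String hostname = args[0];                        // server host
        vips = new InetAddress[args.length - 1];
        for (int i = 1; i < args.length; i++) {
            vips[i - 1] = InetAddress.getByName(args[i]); // e.g. the 21 secondary IPs
        }
        List<Socket> clients = new ArrayList<>();
        while (true) {
            InetAddress clientIpToBindTo = getNextClientVIP();
            // local port 0 lets the kernel pick any free ephemeral port on that IP
            Socket client = new Socket(hostname, 1884, clientIpToBindTo, 0);
            clients.add(client);
        }
    }
}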
With 21 IPs, I would expect (65535 - 1024) * 21 = 1,354,731 to be the boundary. In reality I am able to achieve 1,231,734:
[root@ip ec2-user]# cat /proc/net/sockstat
sockets: used 1231734
TCP: inuse 5 orphan 0 tw 0 alloc 1231307 mem 2
UDP: inuse 4 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
So the socket/kernel/IO side is worked out.
I am still unable to achieve this number using any broker.
Again, these are the kernel settings just after my client/server test.
Client:
[root@ip ec2-user]# sysctl -p
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880 5242880 15242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000

[root@ip ec2-user]# cat /etc/security/limits.conf
* soft nofile 2000000
* hard nofile 2000000
root soft nofile 2000000
root hard nofile 2000000
Server:
[root@ ec2-user]# sysctl -p
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880 5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 1000000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 1000000
net.core.optmem_max = 20480000
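One thing worth double-checking on the server side (my own observation, not something from the test above): net.core.somaxconn only caps the listen backlog, and Java's ServerSocket defaults to a backlog of 50 unless a larger value is requested, so a server that wants to take advantage of the 65535 setting has to ask for it explicitly, roughly like this:

// request a large accept backlog; the kernel clamps it to net.core.somaxconn
ServerSocket server = new ServerSocket(1884, 65535);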