Hi, I was hoping someone could provide some insight into the techniques that DPITunnel uses. I have recognized the following methods so far by reading through the code/documentation (if there are others, please do mention them):
Split the ClientHello at the SNI field
Split the ClientHello and send the parts in random order
Use the TCP window size to make the server fragment the ServerHello
Send a fake ClientHello with an allowed SNI, but with a TTL too short to reach the server
I understand the first method, and the fourth one partially. If you could provide some comments on why these methods exist and how they work, I would be grateful.
DPIs are designed for maximum throughput. Every extra computation is costly and reduces the bandwidth they can handle.
DPIs track TCP and UDP flows (src_ip:src_port - dst_ip:dst_port).
They maintain hash tables similar to Linux conntrack.
For every packet the DPI must decide whether to pass or drop it. A table lookup is fast: if it finds a corresponding entry marked with an “allowed” flag it immediately passes the packet, and it drops the packet if the entry is marked “denied”. Some DPIs use special network cards with FPGAs to offload this task from the main CPU.
Otherwise it must process and analyze the packet, possibly reassembling the TCP stream. That is more resource-costly.
To minimize resource usage, a DPI tries to make a verdict as soon as possible and mark the conntrack entry with either an “allowed” or “denied” flag.
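A toy model of that flow-tracking shortcut might look like this: the first packet of a flow is analyzed once, the verdict is cached, and every later packet is a cheap table lookup. The class and method names here are purely illustrative, not any real DPI's API.

```python
from typing import Dict, Tuple

Flow = Tuple[str, int, str, int]  # src_ip, src_port, dst_ip, dst_port

class ToyDPI:
    def __init__(self, blocked_snis):
        self.blocked = set(blocked_snis)
        self.verdicts: Dict[Flow, str] = {}  # conntrack-like table

    def handle(self, flow: Flow, payload: bytes) -> str:
        if flow in self.verdicts:
            # Fast path: the flow already has a verdict, no analysis needed.
            return self.verdicts[flow]
        # Slow path: analyze the payload (here: a naive SNI substring check).
        verdict = "denied" if any(s.encode() in payload for s in self.blocked) else "allowed"
        self.verdicts[flow] = verdict  # cache: later packets skip analysis
        return verdict
```

Note that once a verdict is cached, the payload of later packets is never looked at again; this is exactly the behavior the evasion tricks exploit.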
That’s why the fake can work. The DPI sees good data and allows the TCP connection. Further packets are not analyzed because the connection is already marked.
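The low-TTL fake from the question's fourth method can be sketched as a send plan: a decoy ClientHello whose TTL expires before the server but after the DPI, followed by the real one. This is only a conceptual sketch; real tools such as DPITunnel craft the fake with raw sockets at the same TCP sequence number as the real data, so the server's byte stream stays intact. The hop counts are assumptions.

```python
NORMAL_TTL = 64
FAKE_TTL = 3  # assumption: the DPI box sits within 3 hops, the server farther away

def plan_fake_hello(real_hello: bytes, fake_hello: bytes):
    """Return the (ttl, payload) segments to emit, decoy first."""
    return [
        (FAKE_TTL, fake_hello),    # dies before the server; only the DPI sees it
        (NORMAL_TTL, real_hello),  # reaches the server; DPI ignores it (flow is marked)
    ]
```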
Simple splitting works only against DPIs that cannot reassemble TCP streams. Sometimes it’s possible to mix split parts of real TCP data with fakes and break the DPI’s reassembler. Another method is sending the split parts in reverse order.
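A minimal sketch of splitting at the SNI: cut the ClientHello in the middle of the hostname so no single TCP segment carries the full name. The offsets here come from a naive byte search; a real implementation parses the TLS extension structure. Actually emitting the parts in reverse order requires raw sockets with correct sequence numbers; this helper only computes the parts.

```python
def split_at_sni(client_hello: bytes, hostname: bytes, reverse: bool = False):
    pos = client_hello.find(hostname)
    if pos <= 0:
        return [client_hello]  # hostname not found: nothing to split
    # Cut inside the hostname so neither segment contains the full SNI.
    cut = pos + len(hostname) // 2
    parts = [client_hello[:cut], client_hello[cut:]]
    return parts[::-1] if reverse else parts
```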
Why fragment the ServerHello? Because some DPIs also analyze the certificate Common Name in the ServerHello to extract the hostname.
Not much can be done to modify server responses without help from the server. Shrinking the TCP window size looks like the only possible method.
Technically it should work, but according to the standard the MSS TCP option should be sent only in the SYN packet, so it’s fixed for the lifetime of a TCP connection. There’s no way to return to the normal MSS and restore performance.
However, I haven’t tested myself how different operating systems handle an MSS change in subsequent packets.
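One way to approximate the window-size trick from userspace, assuming the OS derives the advertised TCP window from the socket receive buffer: shrink the buffer before connecting so the handshake advertises a small window and the server has to spread the ServerHello and certificate over many small segments. How faithfully `SO_RCVBUF` maps to the advertised window is OS-dependent, so treat this as a sketch, not a guaranteed technique.

```python
import socket

def make_tiny_window_socket(bufsize: int = 1024) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Shrink the receive buffer BEFORE connect() so the SYN can advertise
    # a small window (exact behavior varies by OS and window scaling).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return s

# Usage (hypothetical): s = make_tiny_window_socket(); s.connect((host, 443))
```

Enlarging the buffer again after the handshake is what the MSS caveat above does not allow: unlike the receive window, the MSS cannot be renegotiated mid-connection.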