Hi vmrulz,
In your case you will have two NUMA nodes with 96GB of memory each. You can confirm this by running ESXTOP and looking at the MEM stats (press 'm' for the memory view), as shown below:
With your underlying hardware you are correct that a VM created with 16 vCPUs would get 2 vNUMA nodes. Now, to explain your quandary about why higher vCPU counts end up with only one vNUMA node: these calculations are based on physical cores, not logical cores, so once you allocate more vCPUs than there are physical cores, the VM can end up with fewer vNUMA nodes than you expect.
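To make the arithmetic concrete, here is a rough sketch of that sizing logic in Python. The 8-cores-per-socket figure is only an assumption for illustration (it fits 16 vCPUs mapping to 2 vNUMA nodes), so substitute your host's actual counts; this is a simplified model of the behaviour described above, not ESXi's actual algorithm.

# Rough model of the vNUMA sizing described above. Assumptions:
# 2 physical NUMA nodes, 8 physical cores each (16 cores / 32 threads total).
def vnuma_nodes(vcpus, cores_per_node=8, numa_nodes=2, count_threads=False):
    capacity = cores_per_node * (2 if count_threads else 1)
    if vcpus <= 8:                      # below the default numa.vcpu.min, no vNUMA exposed
        return 1
    if vcpus > capacity * numa_nodes:   # more vCPUs than cores: no clean split, single node
        return 1
    return -(-vcpus // capacity)        # ceiling division: spread across the fewest nodes

print(vnuma_nodes(16))                      # 2 - what you expected
print(vnuma_nodes(24))                      # 1 - more vCPUs than physical cores
print(vnuma_nodes(24, count_threads=True))  # 2 - if logical processors were counted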
By default the CPU scheduler tries not to schedule two vCPUs on the same core, i.e. on the two logical processors of one core. Since you are allocating more vCPUs than there are physical cores, that isn't possible, which is why this VM ends up with no vNUMA.
Now, I'm not sure this is going to work, or even whether it will give any benefit, but there is an advanced setting that tells the CPU scheduler to schedule vCPUs on the logical processors of the same core. By enabling this setting you may be able to get the large VMs to present multiple vNUMA nodes. See this link for the details. For it to work you would need to configure the VM with 2 sockets and x cores - I don't think it will, but it's perhaps worth a try.
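If you do try it, a sketch of the relevant VM advanced settings (.vmx entries) for, say, a 24-vCPU VM might look like the lines below. I'm assuming the setting the article refers to is numa.vcpu.preferHT, so treat this as a starting point to verify against the link rather than a recipe:

numvcpus = "24"
cpuid.coresPerSocket = "12"
numa.vcpu.preferHT = "TRUE"

cpuid.coresPerSocket of 12 is what gives you the 2-socket presentation (24 / 12 = 2), and as I understand it numa.vcpu.preferHT is the per-VM flag that lets the scheduler count the logical processors on each core, which is the behaviour described above.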
Does that make sense?