I recently bought a new server based on the APU4C4 from TekLager. They were kind enough to deliver it pre-installed with VMware ESXi 6.7 (even though that OS option wasn't listed on their website).
The server is very nice: it is small, has 4x gigabit NICs, a 16GB SSD and 4GB of RAM. Priced at ~280EUR, it is very affordable.

However, ESXi creates a 4GB scratch partition by default, and a 2.5GB partition for extra diagnostics (in addition to the standard 110MB diagnostics partition). This left me with 7.4GB for the datastore.
```
ls -lh /dev/disks/
total 31156217
-rw-------    1 root     root       14.9G Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____
-rw-------    1 root     root        4.0M Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:1
-rw-------    1 root     root        7.4G Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:10
-rw-------    1 root     root        4.0G Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:2
-rw-------    1 root     root      250.0M Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:5
-rw-------    1 root     root      250.0M Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:6
-rw-------    1 root     root      110.0M Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7
-rw-------    1 root     root      286.0M Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:8
-rw-------    1 root     root        2.5G Mar 10 09:41 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:9
```
- Partition 9 is the 2.5GB extra diagnostics partition
- Partition 10 is the datastore
- Partition 2 is the scratch partition, where VMware stores log files. If this partition is not available, VMware creates a 512MB ramdisk for the log files instead. That would waste precious RAM, and the log files would be wiped at every reboot.
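Before touching anything, it is worth tallying how much space those two partitions tie up. A quick back-of-the-envelope calculation, using the sector ranges from the `partedUtil getptbl` output later in this post (GPT sectors are 512 bytes):

```shell
# Space held by the extra diagnostics partition (9) and the scratch
# partition (2); sector ranges are inclusive, taken from partedUtil getptbl.
diag_mb=$(( (7086079 - 1843200 + 1) * 512 / 1024 / 1024 ))     # partition 9: 2560 MB
scratch_mb=$(( (15472639 - 7086080 + 1) * 512 / 1024 / 1024 )) # partition 2: 4095 MB
echo "$((diag_mb + scratch_mb)) MB reclaimable"                # -> 6655 MB, i.e. ~6.5GB
```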
First, I decided to reclaim the space used by the extra diagnostics partition. Start by putting the host in maintenance mode (Web UI -> Host -> Actions -> Enter maintenance mode, or something like `esxcli system maintenanceMode set --enable true` from the shell).

Then log in over SSH and run the following commands (check the output and adapt the device names to your system):
```
esxcli system coredump partition list
Name                                                                        Path                                                                                            Active  Configured
--------------------------------------------------------------------------  ----------------------------------------------------------------------------------------------  ------  ----------
t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7  /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7   false   false
t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:9  /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:9   true    true

esxcli system coredump partition set --partition=t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7

esxcli system coredump partition list
Name                                                                        Path                                                                                            Active  Configured
--------------------------------------------------------------------------  ----------------------------------------------------------------------------------------------  ------  ----------
t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7  /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7   true    true
t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:9  /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:9   false   false
```
Notice that the active and configured coredump partition has changed from partition 9 to partition 7.
Next, delete partition 9:
```
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____
gpt
1946 255 63 31277232
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
10 15472640 31035392 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

partedUtil delete /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____ 9

partedUtil getptbl /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____
gpt
1946 255 63 31277232
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
10 15472640 31035392 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
```
With partition 9 removed, reboot the host. After the reboot, verify that partition 9 is no longer listed:
```
esxcli system coredump partition list
Name                                                                        Path                                                                                            Active  Configured
--------------------------------------------------------------------------  ----------------------------------------------------------------------------------------------  ------  ----------
t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7  /vmfs/devices/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7   true    true
```
Next, reconfigure the scratch location. Locate the existing datastore:
```
esxcli storage filesystem list
Mount Point                                        Volume Name  UUID                                 Mounted  Type         Size        Free
-------------------------------------------------  -----------  -----------------------------------  -------  ------  ----------  ----------
/vmfs/volumes/5e5aad9a-e8c75bb6-515f-000db953f294  data         5e5aad9a-e8c75bb6-515f-000db953f294     true  VMFS-6  7784628224  1199570944
/vmfs/volumes/355f647d-17552dfa-58c5-45971a489ec4               355f647d-17552dfa-58c5-45971a489ec4     true  vfat     261853184   113840128
/vmfs/volumes/5e5a8421-6e4a1b2c-c901-000db95522a4               5e5a8421-6e4a1b2c-c901-000db95522a4     true  vfat    4293591040  4185522176
/vmfs/volumes/51d1c478-254c0789-dee3-8818e77b2ff5               51d1c478-254c0789-dee3-8818e77b2ff5     true  vfat     261853184   113827840
/vmfs/volumes/5e5a8419-89dba2c4-8963-000db95522a4               5e5a8419-89dba2c4-8963-000db95522a4     true  vfat     299712512    80486400
```
Here we see that my datastore is on /vmfs/volumes/5e5aad9a-e8c75bb6-515f-000db953f294
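The Size and Free columns in that listing are bytes. A quick conversion makes the numbers easier to read (values copied from the output above):

```shell
# Datastore size/free in whole GB, from the esxcli byte counts above
# (integer division, so fractions are truncated).
SIZE=7784628224
FREE=1199570944
echo "size: $((SIZE / 1024 / 1024 / 1024)) GB, free: $((FREE / 1024 / 1024 / 1024)) GB"
# -> size: 7 GB, free: 1 GB
```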
Create a scratch folder on your existing datastore:
```
mkdir /vmfs/volumes/5e5aad9a-e8c75bb6-515f-000db953f294/scratch
```
In the Web UI, go to Host -> Manage -> System -> Advanced Settings. Enter "scratch" into the search box and press Enter:

Change ScratchConfig.ConfiguredScratchLocation to the newly created folder on your existing datastore:

Reboot the host for the change to take effect. You can now delete the scratch partition and create a new partition spanning the area previously occupied by partitions 2 and 9:
```
partedUtil delete /dev/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____ 2
partedUtil add /dev/disks/t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____ gpt "9 1843200 15472639 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
1946 255 63 31277232
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0
10 15472640 31035392 AA31E02A400F11DB9590000C2911D1B8 0
9 1843200 15472639 AA31E02A400F11DB9590000C2911D1B8 0

ls -lah /dev/disks/
total 31156218
drwxr-xr-x    2 root     root         512 Mar 10 12:21 .
drwxr-xr-x   16 root     root         512 Mar 10 12:21 ..
-rw-------    1 root     root       14.9G Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____
-rw-------    1 root     root        4.0M Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:1
-rw-------    1 root     root        7.4G Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:10
-rw-------    1 root     root      250.0M Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:5
-rw-------    1 root     root      250.0M Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:6
-rw-------    1 root     root      110.0M Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:7
-rw-------    1 root     root      286.0M Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:8
-rw-------    1 root     root        6.5G Mar 10 12:21 t10.ATA_____Hoodisk_SSD_____________________________KADTC7A11201576_____:9
```
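The spec string passed to `partedUtil add` is built straight from the old partition table: the new partition starts at old partition 9's first sector, ends at old partition 2's last sector, and reuses the VMFS type GUID that partition 10 already carries. As a sketch of how the spec is derived:

```shell
# Derive the partedUtil add spec (format: "num start end typeGUID attr").
START=1843200                               # first sector of old partition 9
END=15472639                                # last sector of old partition 2
VMFS_GUID=AA31E02A400F11DB9590000C2911D1B8  # same type GUID as partition 10 (VMFS)
echo "9 $START $END $VMFS_GUID 0"
# -> 9 1843200 15472639 AA31E02A400F11DB9590000C2911D1B8 0
echo "$(( (END - START + 1) * 512 / 1024 / 1024 )) MB"   # -> 6655 MB, the ~6.5GB gap
```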
We have now reclaimed 6.5GB as partition 9. Let's add it to our existing datastore.
In the Web UI, go to Storage -> data and click "Increase capacity".

Select "Expand an existing VMFS datastore extent"

Select the empty partition and finish the wizard.
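As a rough sanity check, the expanded datastore should equal the old extent plus the merged partition (sector counts taken from the partition table shown earlier; 512-byte sectors):

```shell
# Old datastore extent (partition 10) plus the merged partition 9.
P10=$(( 31035392 - 15472640 + 1 ))   # 15562753 sectors (~7.4GB extent)
P9=$(( 15472639 - 1843200 + 1 ))     # 13629440 sectors (~6.5GB reclaimed)
echo "$(( (P10 + P9) * 512 / 1024 / 1024 )) MB after expansion"
# -> 14254 MB, i.e. roughly 13.9GB of datastore on a 16GB SSD
```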


You have now reclaimed 6.5GB free space. Enjoy!
Resources that were useful in my quest:
https://www.virten.net/2016/11/usb-devices-as-vmfs-datastore-in-vsphere-esxi-6-5/
https://kb.vmware.com/s/article/1036609
https://gist.github.com/csamsel/eb71211bfdaa356c55e62344201354fd
https://kb.vmware.com/s/article/2004299
http://www.vmwarearena.com/how-to-configure-scratch-partition-in-vmware-esxi-using-web-client/
http://www.vmwarearena.com/what-i-wish-everyone-knew-about-esxi-partition/
https://kb.vmware.com/s/article/1033696