• Both systems are using Windows Server 2012 R2 Standard and Hyper-V (with very similar CPUs).
  • Both systems are using a core license.
  • CreateGUID is definitely not the bottleneck; I checked this very early on. Removing the write to the global (while keeping CreateGUID) allows the CPU to reach 100%. The effect of using a GUID (versus an incremental ID) is to spread out the global node writes, which might affect performance. But that's not the explanation, because then both systems would be affected.
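As a rough illustration of that spreading effect (plain Python, not ObjectScript; purely hypothetical data): sequential IDs collate in insertion order, so consecutive writes land on neighbouring subscripts, while GUID subscripts collate essentially at random across the whole global:

```python
import uuid

# sequential IDs sort in insertion order: consecutive writes land on
# neighbouring subscripts (and therefore on the same few blocks)
seq_ids = ["%06d" % i for i in range(10)]
assert sorted(seq_ids) == seq_ids

# GUID subscripts collate essentially at random: each new write lands
# at an unpredictable position in subscript order, spreading the
# writes over the whole global
guids = [str(uuid.uuid4()) for _ in range(10)]
print(sorted(guids)[:3])  # insertion order and sorted order rarely match
```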

I have edited the OP to reflect those details.
I have tested this on 4 systems (all very similar), and only one behaves like that (slow DB writes).

The database is on an SSD/NVMe drive. The impact of random vs sequential access is smaller on SSD than on HDD, but it's not negligible. Run a CrystalDiskMark benchmark on any SSD and you will find that random access is slower than sequential access.

This image summarizes it well:
[image: SATA HDD vs SATA SSD vs SATA NVMe CrystalDiskMark results]
 

Why I want to defragment the database: I found that the I/O write queue length on the database drive goes quite high (up to 35). The drives holding the journals and the WIJ have a much lower maximum write queue length (it never goes above 2), while the amount of data being written is the same (peaks are about 400 MB/s). The difference is that database writes are random access, while WIJ and journal writes are pretty much sequential.

Thanks. I remember seeing it before while looking for info about database internals (it was a long series of community pages you wrote). I will try to get that BlocksExplorer working and will post results.
 

I have checked your project and extracted the logic of this function (almost a 1:1 copy-paste). It works for small databases (a few GB in size), and I can easily calculate a fragmentation percentage (by checking for consecutive blocks of the same global). But for bigger databases (TB in size) it does not work, as it only enumerates a small percentage of all globals. It seems the global catalog is quite big and split across multiple blocks (it usually starts at block #3).
EDIT: there is a "Link Block" pointer to follow:

I wrote a script that calculates the database fragmentation level (between 0 and 100%). The idea is to fetch all blocks, find which global each one belongs to, and then count how many segments exist (a segment being a run of consecutive blocks belonging to the same global; e.g. in AAAADDBBAACCC there are 5 segments). It's based on Dmitry's BlocksExplorer open source project. The formula is as follows:

Fragmentation % = (TotalSegments - GlobalCount) / (TotalBlocks - GlobalCount)
Blocks                  Formula         Fragmentation
AAAAAABBCCCDD (best)    (4-4)/(13-4)    0%
AAAADDBBAACCC           (5-4)/(13-4)    11%
ACADDAABBAACC           (8-4)/(13-4)    44%
ACABADADBACAC (worst)   (13-4)/(13-4)   100%
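The formula above can be sketched in Python (letters stand in for which global owns each block; this is only the arithmetic, not the block scan itself):

```python
def fragmentation(blocks: str) -> float:
    """Fragmentation %: a segment is a run of consecutive blocks
    belonging to the same global."""
    # a new segment starts whenever the owning global changes
    segments = sum(1 for i, b in enumerate(blocks) if i == 0 or b != blocks[i - 1])
    global_count = len(set(blocks))  # number of distinct globals
    total_blocks = len(blocks)
    return (segments - global_count) / (total_blocks - global_count) * 100

print(round(fragmentation("AAAADDBBAACCC")))  # 11
print(round(fragmentation("ACABADADBACAC")))  # 100
```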
/// usage: do ..ReadBlocks("D:\YOUR_DATABASE\")
ClassMethod ReadBlocks(path As %String)
{
    new $namespace
    znspace "%sys"
    // get total amount of blocks
    set db = ##class(SYS.Database).%OpenId(path)
    set totalblocks = db.Blocks
    set db = ""
    set blockcount = 0
    open 63:"^^"_path
    set ^TEMP("DEFRAG", "NODES", 3) = $listbuild("", 0)
    while $data(^TEMP("DEFRAG", "NODES"))=10 // any children
    {
        set blockId = ""
        for
        {
            set blockId = $order(^TEMP("DEFRAG", "NODES", blockId),1,node)
            quit:blockId=""
            kill ^TEMP("DEFRAG", "NODES", blockId)
            
            set globalname = $lg(node,1)
            set hasLong = $lg(node,2)
            
            do:blockId'=0 ..ReadBlock(blockId, globalname, hasLong, .totalblocks, .blockcount)
        }
    }
    close 63
    set ^TEMP("DEFRAG","PROGRESS") = "DONE"
    do ..CalculateFragmentation()
}
ClassMethod ReadBlock(blockId As %String, globalname As %String, hasLong As %Boolean, ByRef totalblocks As %Integer, ByRef blockcount As %Integer)
{
    view blockId
    set blockType = $view(4,0,1)
    
    if blockType=8 // data block
    {
        if hasLong 
        {
            for N=1:1
            {
                set X = $view(N*2,-6)
                quit:X=""
                set gdview = $ascii(X)
                if $listfind($listbuild(5,7,3),gdview) 
                {
                    set cnt = $piece(X,",",2)
                    set blocks = $piece(X,",",4,*)
                    for i=1:1:cnt 
                    {
                        set nextBlock = $piece(X,",",3+i)
                        
                        set ^TEMP("DEFRAG","GLOBAL",nextBlock) = globalname
                        set blockcount = blockcount + 1
                        // update progress
                        set ^TEMP("DEFRAG","PROGRESS") = $number(blockcount / totalblocks * 100, 2)
                    }
                }
            }
        }
    } 
    else // block of pointers
    {
        if blockType=9 // catalog
        {
            set nextglobal = $view(8,0,4) // large catalogs might span multiple blocks
            quit:$data(^TEMP("DEFRAG","GLOBAL",nextglobal))
            set:nextglobal'=0 ^TEMP("DEFRAG", "NODES", nextglobal) = $listbuild("",0) // next catalog
        }
        
        for N=1:1
        {
            set X = $view(N-1*2+1,-6)
            quit:X=""
            set nextBlock = $view(N*2,-5)
            if blockType=9 
            {
                // catalog entries carry the global name and a long-string flag
                set globalname = X
                set hasLong = 0
                set:$piece($view(N*2,-6),",",1) hasLong = 1
            }
            
            continue:$data(^TEMP("DEFRAG","GLOBAL",nextBlock)) // already seen?
            set ^TEMP("DEFRAG", "NODES", nextBlock) = $listbuild(globalname,hasLong)
            set ^TEMP("DEFRAG","GLOBAL",nextBlock) = globalname
            set blockcount = blockcount + 1
            set ^TEMP("DEFRAG","PROGRESS") = $number(blockcount / totalblocks * 100, 2)
        }
    }
}
ClassMethod CalculateFragmentation()
{
    set segments = 0, blocks = 0, blocktypes = 0
    kill ^TEMP("DEFRAG", "UNIQUE")
    
    set previousglobal = ""
    set key = ""
    for
    {
        set key = $order(^TEMP("DEFRAG","GLOBAL",key),1,global)
        quit:key=""
        if global '= previousglobal
        {
            set previousglobal = global
            set segments = segments + 1
        }
        
        if '$data(^TEMP("DEFRAG", "UNIQUE", global))
        {
            set ^TEMP("DEFRAG", "UNIQUE", global) = ""
            set blocktypes = blocktypes + 1
        }
        
        set blocks = blocks + 1
    }
    write $number((segments - blocktypes) / (blocks - blocktypes) * 100, 2)
}

Notes:

  • Use it at your own risk. It's not supposed to write anything to the database (it only performs read operations), but I'm unfamiliar with the VIEW command and its possible caveats.
  • This might take a really long time to complete (several hours), especially if the database is huge (TB in size). Progress can be checked by reading the ^TEMP("DEFRAG","PROGRESS") node.

Do you think it makes sense to set the expansion size to something other than the default (e.g. 300MB), especially knowing that in my case the TEMP database ends up being 5GB by the end of every day (thus requiring many expansions throughout the day)?

Thanks, this is exactly what I was looking for.
Interestingly, all idle IRISDB.exe processes (created because of the JobServers setting) are reported as using "<0.01" CPU in Process Explorer. You can indeed see the total CPU cycles counter increasing over time. This is unlike processes created via NDS / Apache (which report no CPU usage at all unless they are actually doing something). Not sure if it's a big deal.

write $ascii("♥") gives me 63 (which is a question mark). I'm running this in the Studio output window (as in the OP).

Running this in Terminal works!
I get the expected result, and thus the same result as you.

I also did the following test: create a routine with write $ascii("♥") inside and call it from outside (e.g. the Studio console). It works (so server-side code also works).

However, I have an IRIS server where write $ascii("♥") always returns 63, even in code and in Terminal. Is there a setting somewhere in the portal for UTF-8 support?
EDIT: I found where it's defined: it's in the NLS (National Language Settings).

The server has Latin1 defined, while the working local station has UTF-8. You can define different tables per category: Terminal, Process, ...
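The 63 result is consistent with the heart character being replaced by "?" (character code 63) when text is forced through a Latin1 table. A quick Python illustration of that substitution (this mimics the lossy translation, it is not IRIS itself):

```python
# "♥" (U+2665) has no Latin-1 code point, so a lossy translation
# substitutes "?" -- which is character code 63
s = "♥"
translated = s.encode("latin-1", errors="replace").decode("latin-1")
print(translated, ord(translated))  # ? 63

# with a UTF-8-capable table the character survives intact
print(ord("♥"))  # 9829
```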