Introduction and Important Notes
Before I go into the details, I want to provide some context and the back story.
I will also list the references I used at the end of this article, so you can go to the source if you want to.
There are also a few important notes that are worth mentioning, so I'll list them in this section as well.
My Use Case and Environment
So the reason I was forced to do this was a mix-up between our team and the sales team handling a customer requirement.
The customer wanted a 1-year commitment with #Google on an E2 instance with a certain amount of CPU and memory. In addition, they wanted to reduce the size of the persistent disk (PD) attached to the Compute Engine instance from 1TB to 400GB.
While we were rushing to close this request and move on to the next one (this was a small one compared to some other opportunities we were working on), we missed that last part about the PD resize.
By the time we applied the change, we had successfully enabled the commitment on #GCP, and it showed as active the next day. However, the customer still insisted on having their 400GB PD instead of the 1TB one (which is their right, of course, since we told them it was doable). Being a possibility thinker, I never like to give up on something as long as I'm even 1% (that's one – I did not miss any zeros, so I mean one percent) convinced there is a chance it can be done.
So the customer had a Windows Server 2012R2 server and a 1TB storage that needs to shrink somehow…
The Limitation
Now, extending the size of a virtual disk is basic functionality, and every single virtualization platform can do it. But what about reducing the size?
Well, it's not that simple.
In Google Cloud Platform (GCP), the size of a persistent disk (PD) depends on one of two things:
- Either on the size that you put in the disk creation process (if you are creating a blank disk)
- Or on the size of the source disk used to generate a snapshot or image, which is then used to create the new disk.
To explain point number two a little more simply, consider that you have a Compute Engine (#GCE) instance with a boot disk that is 200GB in size. When you create a snapshot or an image of that boot disk and then use it to create a new disk, you will never be able to go below 200GB for the new disk size, because the source was 200GB. This is also true even if you import the image into GCP from another platform such as #VMware or #HyperV, or from anywhere else.
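To illustrate, here is a quick sketch with the gcloud CLI (the disk, snapshot, and zone names are placeholders of mine). Assuming boot-disk-snap was taken from a 200GB disk, this request:

gcloud compute disks create new-disk --source-snapshot=boot-disk-snap --size=100GB --zone=us-central1-a

should be rejected with an error telling you the requested disk size cannot be smaller than the size of the snapshot's source disk, while any size of 200GB or above goes through.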
The Catch – or Trick…
That last sentence in the previous paragraph is where the whole problem lies.. and the solution as well. You see, in the case of our customer, all I needed was an image whose source boot disk was 400GB or less. So I set that as my goal and focused on getting it done.
You Have to be Warned!
Now before going into the details, please consider the below points:
- This is NOT a fast process. You WILL NEED time, and that can be 1 or 2 days until you complete the whole thing. So plan for it, and if you have live data coming into the system you are working on, include some sort of last-minute backup-and-restore plan before going live with the new size.
- A lot of that time goes into the individual steps. You will need time to:
- Create the full backup of the source system.
- Upload the backup to a Google Cloud Storage (GCS) bucket.
- Download the full backup from the #GCS bucket onto your system.
- Restore the backup to the temporary server.
- Upload the new virtual disk to GCP and convert it into a GCP image.
- Create a new disk from the new image with the new desired size.
- This also requires good internet bandwidth, because the overall time will depend on how fast you can download and upload.
- There will be some additional costs on GCP associated with this.
- This is not a friendly process – you might have to do some trial and error, or restart the whole thing if you want to change anything in the final result. If you are not patient enough, then I suggest you just rebuild the whole system on a new VM with the desired disk size instead of going through the rest of this article.
If you are fine with all of the above, then I love you 😆 and you can continue!
Prerequisites
With that long story and introduction out of the way, we are good to get going.
There are a few prerequisites that you need to be aware of and have ready:
- A Windows Server 2012 R2 ISO image (if you have the same case as me with Windows Server 2012 R2; otherwise any matching Windows ISO will work).
- A Hyper-V host. In my approach I used Hyper-V on my Windows 10 Pro machine.
- Enough local disk space to hold both the full backup you will create and the VHD file of the Hyper-V VM you will restore that backup to.
- A tool to create the volumes and partitions on the Hyper-V VM. I used a #GParted ISO image.
Getting Into the Thing..
Resize the volumes in the VM on GCP to the desired final size or smaller
This is a very important step: it is where you decide the final, target size that you want to go down to.
In my case I resized from 1 TB down to 400 GB. I actually should have gone lower, but that did not occur to me at the time. I could have gone for the smallest possible size so I would have some freedom during the final steps..
Anyway, this is where you control how small the final disk will be, so go as small as you can.
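If you want a sketch of that resize with diskpart inside the Windows VM (the number is illustrative – it is the amount to shrink by, in MB – and you cannot shrink a volume below its used space, so clean up or defragment first if needed):

- diskpart
- select volume C
- shrink desired=614400
- exit

Disk Management's "Shrink Volume" does the same from the GUI if you prefer that.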
Take a full backup of the VM using Windows Server Backup
Your next step is to take a full backup of the VM using Windows Server Backup. Make sure you include the full volumes.
In my case I initially took a full backup including the bare metal recovery option as well. That turned out to be useless in the end, as I only needed the volumes backup. So it is up to you whether you take everything just to be safe or select only the volumes on the VM – either way, you need to select ALL the volumes.
You can create a temporary new persistent disk on GCP, connect it to the VM, and use it to store the backup, since Windows Server Backup will not let you store the backup on one of the volumes included in the backup.
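As a sketch of the command-line equivalent, assuming the temporary disk got mounted as E: and the VM has C: and D: volumes:

wbadmin start backup -backupTarget:E: -include:C:,D: -quiet

The -quiet switch just runs it without prompting; drop it if you want to confirm the selection first.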
Once the backup is done, you just need to get it to your local environment. In my case I first had to upload it to a GCS (Google Cloud Storage) bucket, then download it from the bucket to my system.
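gsutil handles both hops; the bucket name and local paths below are placeholders of mine:

gsutil -m cp -r E:\WindowsImageBackup gs://my-backup-bucket/
gsutil -m cp -r gs://my-backup-bucket/WindowsImageBackup D:\restore\

The -m flag parallelizes the transfer, which helps a lot at these sizes.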
Set up a local VM in your local environment
This will be the recovery target, and it is the basis of what I eventually got back onto GCP. The point of it is that we want to use its virtual disk and migrate it – or rather upload it to GCP and then migrate it – to Compute Engine.
You can use whatever environment you have, but keep it simple and basic. I used Hyper-V because I had Windows 10 Pro, and I created a generation 1 VM with a disk file of type VHD (NOT VHDX). If you want to experiment with other settings, that's up to you.
The VM's virtual disk must be sized 1 or 2 GB larger than the total size of the volumes you ended up with after resizing the VM on GCP. For example, in my case I had C & D on the VM in GCP and resized each of them to 200 GB, ending up with 400 GB total. On the local VM I therefore created a virtual disk with a total size of 402 GB.
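If you prefer PowerShell over the Hyper-V Manager wizard, a minimal sketch (the paths, names, and memory size are just mine to illustrate) would be:

New-VHD -Path C:\VMs\restore-target.vhd -SizeBytes 402GB -Dynamic
New-VM -Name restore-target -Generation 1 -MemoryStartupBytes 4GB -VHDPath C:\VMs\restore-target.vhd

The .vhd extension is what tells New-VHD to produce a VHD rather than a VHDX.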
Set up the disk layout to exactly match the one on GCP
This is also a very important step. If you don't create the exact volume layout that GCP expects, you might not be able to boot the final image on GCP.
If you look at the disk manager of any Windows VM on GCP, you will notice there are no hidden system or boot partitions. GCP just makes one single partition or volume and puts everything on it. So you will only find C, for example, or in my case C & D.
In this step you either use the GParted ISO image to create the partition layout, or, if you are comfortable doing it with diskpart from the Windows Server ISO, that will save you a few minutes, I guess.
In my case, again, I had to create 2 partitions, C & D, and I set each one to 201 GB.
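If you go the diskpart route, a sketch of the layout I describe (201 GB is 205824 MB; adjust the sizes to yours):

- diskpart
- select disk 0
- clean
- create partition primary size=205824 (this one becomes C)
- format fs=ntfs quick
- assign letter=C
- create partition primary (the rest of the disk becomes D)
- format fs=ntfs quick
- assign letter=D
- exit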
Manually restore the volumes one by one
If you are not already in a WinPE environment, now is the time to boot from that Windows Server ISO and go to the repair option.
There are 2 important tasks you need to do to get this going:
- Initialize the network: needed if you are going to access the backup over the network – which makes sense considering both the backup and the VM are local to your system, so you don't have to waste more time moving the backup onto a separate virtual disk and mounting it directly to the local VM.
This reference helped me with the WinPE commands – basically the command I needed to run was:
wpeutil InitializeNetwork
- Find the proper naming of the backup to restore: basically, this comes down to knowing which partition to restore to which.
You will need to access the backup metadata and look for this information. To do that, these commands are helpful:
wbadmin get versions -backupTarget:<X:>
- Replace the <X:> with the whole path to the location where the folder WindowsImageBackup is located.
- This command will output the "versions" stored inside the backup. If you created this backup specifically for this job and there was nothing before it, then you should see only one single version, named in date/time format.
- This is the version we will need to use. Copy the version ID to the clipboard.
wbadmin get items -version:<version_ID> -backupTarget:<X:>
- This will give you the details of the backups stored inside the folder. Again, replace the <X:> with the path to the WindowsImageBackup folder, and also make a note of this command because you will need it again in just a bit.
- Also replace the whole <version_ID> with the version ID you got from the previous command. What you should confirm here is that each partition you created earlier is equal to or larger than the corresponding volume contained in the backup archive.
- This command will also give you the volume IDs that you will need to use to start the recovery. In my case I had C & D and it gave me the value for both of these.
Now that you got some “situational awareness” about the local environment and what’s inside the backup archive, it is time to restore stuff.
You should already have the version ID of the backup on the clipboard (if you did as advised and copied it; otherwise re-run the first of the 2 commands above to get it), and you should already have a partition layout with volumes equal to or larger than what's in the backup. Start by recovering C, with the following command:
wbadmin start recovery -version:<version_ID> -backupTarget:<X:> -itemType:Volume -items:C: -recoveryTarget:C:
- Replace <version_ID> with the version ID of the backup you want to restore from – as explained just above with the 2 commands listed.
- Replace <X:> with the path that leads to the WindowsImageBackup folder, as described above. Don't forget to run wpeutil InitializeNetwork first if the backup is on a network location.
The above will start the recovery of the data only; we still have to fix the boot. Once the recovery of C is done, do the same for the remaining volumes – I had to do it for the D drive as well, and then I was done.
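For completeness, the D pass is the same command with the item and target switched (same placeholders as before):

wbadmin start recovery -version:<version_ID> -backupTarget:<X:> -itemType:Volume -items:D: -recoveryTarget:D: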
Now we need to fix the boot so we can boot into the recovered system on the smaller disk.
First of all, you need to mark the C drive as active – or ensure that it already is. The steps for that are straightforward; just follow these commands:
- diskpart
- select volume C
- active
- exit
Once that is done, the only thing left is to fix the boot.
Fix the boot manager (BCD)
This is a straightforward process.
You need to rebuild the boot manager and let it detect the Windows installations and create the required entries.
- cd /d C:\
- bootrec.exe /fixmbr
- bootsect.exe /nt60 all /force
- del c:\boot\bcd
- bootrec.exe /rebuildbcd
For me, once I was done with these commands, I was able to boot normally into the recovered system.
Confirm everything is good and shut down the local VM
Now that I was inside the system, I had to verify everything was well. I noticed only one issue, and it was easy to fix.
In Event Viewer I noticed an error related to MSDTC, so I just ran msdtc -resetlog and everything went fine.
Once I made sure no other issues were present and everything was stable, I shut down the server to prepare to move it to GCP..
Convert the virtual disk to a GCP image
This is a very straightforward step. First of all, the disk I'm going to upload actually came from a GCP image in the first place. So while there is a predefined checklist of steps to go through to prepare an image for migration to GCP, I did not bother with it, because this was originally a GCP image and I just wanted to run it there again.
I simply uploaded the VHD file to Google Cloud Storage, then went to GCP -> Compute Engine -> Disks, created a new disk, and set its source to the VHD file. I marked the option saying it contains an OS – Windows Server 2012 R2 – and let Google attach their own product key to the image (it is theirs anyway).
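If you prefer the CLI over the console for this part, the import tool can do the upload-to-image conversion in one step – the image name and bucket path here are placeholders of mine:

gcloud compute images import restored-small --source-file=gs://my-backup-bucket/restore-target.vhd --os=windows-2012r2

Note this spins up temporary import resources in your project, which adds a little to the costs mentioned earlier.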
Once the disk creation was done, I created an image from that disk. Once that finished (it took a while), I simply created a new disk from that image, stopped the VM on GCP that had the large disk, detached the large disk, attached the smaller one, and started the VM up again.
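The same swap, sketched with gcloud (the VM, disk, and zone names are placeholders of mine; --boot marks the new disk as the boot disk):

gcloud compute disks create small-boot --image=restored-small --size=400GB --zone=us-central1-a
gcloud compute instances stop customer-vm --zone=us-central1-a
gcloud compute instances detach-disk customer-vm --disk=old-1tb-disk --zone=us-central1-a
gcloud compute instances attach-disk customer-vm --disk=small-boot --boot --zone=us-central1-a
gcloud compute instances start customer-vm --zone=us-central1-a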
Once the VM booted up, everything worked fine and at the end we got our 400 GB disk and our customer became happy!
Of course, I did some cleanup afterwards and deleted the now-unneeded resources, to make sure the cost would not keep adding up for nothing.
References
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/wpeutil-command-line-options
Disclaimer
As you have seen, this process is very long and time-consuming, and it might not be worth all the effort. However, due to the situation we found ourselves in, I was forced to go down this path to make sure we satisfied our customer and fulfilled a commitment that had been given to them.
It is really up to you whether you want to go through the same or not. I have just shared my experience, and I hold no responsibility at all for the results and the outcome for you, or for whatever might go wrong. So please consider all the implications, consequences, and requirements before you start on this.
Check out my other blog posts here.
Check out my channel on YouTube and subscribe :-)