I have seen documentation saying to build an empty VM with slightly more space on each volume than the physical server had, then use Clonezilla to create an image of the server, then import it. That seems OK, but I’m hoping someone out there has more real-world experience doing this and can share whether they did it differently, or encountered any pitfalls.

As my environment matures, I am moving from “Hey, I have one physical server with everything on it” to “Let’s use a hypervisor and spin services off onto their own VMs.” Once the base OS is P2V’d, I’ll be able to have two hypervisors and start implementing HA. I’ve been using this system as a scratchpad and dev box for 10 years and would love to just migrate it over.
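As a point of comparison with the Clonezilla route, one common low-tech approach is to stream the physical disk over SSH into an image on the Proxmox host and import that. This is only a sketch, and it assumes the source box can be taken offline (or at least quiesced) for the copy; the hostname, VM ID 100, and `local-lvm` storage name are illustrative assumptions, not from the thread:

```shell
# On the Proxmox host: pull the whole source disk into a raw image.
ssh root@old-server "dd if=/dev/sda bs=1M status=progress" > /root/old-server.raw

# Import it as a disk of an empty, pre-created VM (ID 100) on the chosen storage.
qm importdisk 100 /root/old-server.raw local-lvm

# Attach the imported disk and make it the boot device.
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
```

A raw `dd` image is bit-for-bit, so unlike the Clonezilla approach the target disk only needs to be at least the same size, not resized partition by partition.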

  • stanleytweedle@lemmy.world · 1 year ago

    I’ve been running PVE for about 4 years but never had a reason to try P2V migration at home or work. Kind of curious what about the host makes it worth the effort vs just rebuilding the services. I’ve become a big fan of LXC. The turnkey distros cover most use cases for home and a lot for professional needs too, at least small scale in-house stuff.

    • surfrock66@lemmy.world (OP) · 1 year ago

      A long time ago I got two R710s for cheap. One was a test server running Ubuntu Server with a desktop environment (I was used to Windows Server administration) and the other was a headless Minecraft server. Both have about 200 GB of RAM and similar CPU configs. Every self-hosting thing I tried went on the main server… DHCP, DNS, Jellyfin, Nextcloud, Apache, Vaultwarden… it got out of hand. Even then, I used the desktop over X2Go as a stable environment when moving between clients. We replaced the Minecraft server, then I converted that box to Proxmox, then peeled services off into smaller VMs while learning to use Ansible. Now every “server” role has been moved off, and I basically want the underlying OS as a remote-desktop VM. It isn’t precious per se, and it is backed up, but starting over would be a headache, so I wanna take a real shot at P2V.

  • daftfuder@lemmy.world · 1 year ago (edited)

    Previously I’ve done P2V onto VMware, then converted to Proxmox. It worked, but it wasn’t pretty…

  • Sudo@lemmy.world · 1 year ago

    I’ve never done P2V with Linux, only Windows onto a Hyper-V host. It went smoothly.

  • SniperFred@feddit.de · 1 year ago

    Never done P2V, but I’ve done V2V before using the StarWind converter. It’s freeware, and I just read it can also do P2V. Proxmox itself isn’t available as a target, so you might have to convert your server to an ESXi VM first, and then again from there to Proxmox 🤔 That might be the slowest and one of the dirtiest ways to tackle this, but it might work.

    here’s the link to the converter software: https://www.starwindsoftware.com/starwind-v2v-converter

    • ang3l12@dit.reformed.social · 1 year ago

      The StarWind converter can export to a QEMU image though, which you can then import into Proxmox.

      I used the converter to move from hyper-v to proxmox just last month.

  • Snowplow8861@lemmus.org · 1 year ago

    If this is in the commercial space, there are products like Carbonite that, in some senses, don’t care whether the machine is physical or virtual. If getting it done is time-sensitive (e.g. migrating a few dozen servers from an old provider to a new one in about a week), don’t spend too much time deciding which perfect world you want to live in; you’ll end up like the 50% of projects that fail.

    If you migrate then rebuild that’s also completely reasonable.

    One thing I’m not sure of, but here’s an approach using just Veeam: take your backup, export it to VMDK, then convert it to qcow2 (or whatever): https://blog.lbdg.me/proxmox-convert-vmdk-to-qcow2/
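    The VMDK-to-qcow2 step from the linked post boils down to a single `qemu-img` call. A minimal sketch; the filenames here are illustrative, not from the thread:

    ```shell
    # Convert a Veeam-exported VMDK to qcow2 for Proxmox.
    # -p shows progress; -f and -O are the source and target formats.
    qemu-img convert -p -f vmdk -O qcow2 backup-export.vmdk old-server.qcow2

    # Sanity-check the result before importing it.
    qemu-img info old-server.qcow2
    ```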

    With that converted, do delta syncing with Carbonite.

    If you need a huge outage window, say for dozens of terabytes, you might need more than a weekend: ask everyone to finish up by Friday lunch and take all the servers offline; then you should be able to take the source VMDKs and convert them. Just attach the storage straight to Proxmox.
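    That “straight attach” step can be sketched with Proxmox’s own tooling; `qm importdisk` understands VMDK directly and converts on import, so the separate qcow2 conversion can even be skipped. VM ID 101 and the `local-lvm` storage name are assumptions:

    ```shell
    # Import the source VMDK into an empty, pre-created VM and
    # wire it up as the boot disk.
    qm importdisk 101 old-server.vmdk local-lvm
    qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot order=scsi0
    ```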

    One thing, though: install the virtio drivers in advance, before the migration begins.
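    For a Linux source like the OP’s Ubuntu box, “installing virtio drivers” mostly means making sure the virtio modules land in the initramfs before the move, so the clone can find its disk on first boot under KVM (a Windows source needs the virtio-win driver ISO instead). A hedged sketch for Debian/Ubuntu:

    ```shell
    # Confirm the running kernel was built with virtio support at all.
    grep -i virtio /boot/config-$(uname -r)

    # Force the relevant modules into the initramfs and rebuild it.
    cat >> /etc/initramfs-tools/modules <<'EOF'
    virtio_pci
    virtio_blk
    virtio_scsi
    virtio_net
    EOF
    update-initramfs -u
    ```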

    BTW, I’m not a Proxmox expert, since I only run a 3-node cluster in my home lab, but at work I’ve done dozens of client migrations to and from different platforms: Hyper-V, VMware, Nutanix, AWS, Azure.

    But the other commenters are totally right: rebuilding is best. You don’t have to do it before the move, though; just start doing it after. Migrate and get everything onboarded first, then replace services one at a time. Fewer changes at once. You’ll take fewer shortcuts in the rebuild when there’s no hard, immediate deadline.

    If you’re not under a deadline, don’t migrate, imo. Just build new. If there’s anything you can’t move, maybe keep it on the old platform, complain to the vendor over and over until they fix it, and raise it as a risk to your decision makers so the performance and legacy-debt problems aren’t your fault.

    Hope that helps, and good luck!