disk image creation & restoration

Discussion in 'Linux Networking' started by Antoine Logean, Aug 6, 2003.

  1. Hi everybody,

    How can I create and copy (with which program) a disk image of my /
    partition to a /backup partition? And more importantly, how can I
    restore it at boot time? The best solution would be to have the
    different images on a server and to restore them on each client.

    What kinds of solutions exist on Linux?

    thanks for your help

    Antoine
     
    Antoine Logean, Aug 6, 2003
    #1

  2. The problem is that there are 12 clients that have to be reinstalled
    every morning in a pretty simple way.

    Now imagine you copy the huge tar file of the / partition to the backup
    partition. OK. But how can you restore it automatically if the /
    partition is destroyed? I cannot manually boot each client with a
    rescue disk, reformat /, untar the thing and copy it to /. I would
    have to come in at 5 AM every morning!

    Do you understand my problem?

    Antoine
     
    Antoine Logean, Aug 6, 2003
    #2

  3. By making the partition first! Next silly question?
    No. You are an idiot. Have you ever heard of scripting? It appears
    NOT. Here, have a free consultancy:

    sfdisk /dev/hda < sfdisk.save   # restore the saved partition table
    mke2fs /dev/hda5                # recreate the root filesystem
    mkswap /dev/hda2                # recreate swap
    mount /dev/hda5 /mnt
    tar xzvfC /image.tgz /mnt       # unpack the image onto the new root

    Put it in /bin/rc on the live cdrom, and boot with init=/bin/rc.
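
    (For completeness, the matching image-creation step, run once on the
    master machine, might look like the following; the device names and
    the /image.tgz path are carried over from the restore script above,
    and the exclude list is an assumption:)

    sfdisk -d /dev/hda > sfdisk.save        # dump the partition table
    cd / && tar czvf /image.tgz \
        --exclude=./proc --exclude=./image.tgz .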

    That will be $0. Plus the cost of my education. Which makes up for
    yours.


    Peter
     
    Peter T. Breuer, Aug 6, 2003
    #3
  4. Ooof. Dude, you need to learn how to use tftp to install disk images
    online. And include the details when you ask for solutions; the devil
    is in the details.

    Also, generally ignore Peter. He cops a really hard attitude with the
    newbies, and his answers often leave out critical bits.
     
    Nico Kadel-Garcia, Aug 6, 2003
    #4
  5. Antoine Logean wrote:
    | How can I create and copy (with which program) a disk image of my /
    | partition to a /backup partition? And more importantly, how can I
    | restore it at boot time?
    You could use find and cpio (in -p mode) to do the copy from / to /backup.

    You could use it the other way to restore.
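
    (A minimal sketch of that idiom; -xdev keeps find on the root
    filesystem, so the copy doesn't recurse into /backup itself:)

    # copy / to /backup, preserving directories and modification times
    find / -xdev -depth -print | cpio -pdm /backup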

    Why would you want to restore at boot time? If your hardware is so bad
    that a total restore is required every time you boot, you should spend
    your time getting the hardware fixed.

    As a user, I would find having my files all restored to some time in the
    past quite intolerable. It would mean that nothing I did between reboots
    would, in fact, have been done.

    What is the real problem you are trying to solve?

    --
    ~ .~. Jean-David Beyer Registered Linux User 85642.
    ~ /V\ Registered Machine 73926.
    ~ /( )\ Shrewsbury, New Jersey http://counter.li.org
    ~ ^^-^^ 7:10am up 15 days, 12:03, 2 users, load average: 2.23, 1.98, 1.53
     
    Jean-David Beyer, Aug 6, 2003
    #5
  6. *Sigh*. The bit that Peter entirely left out, under the "pay no
    attention to that man behind the curtain" approach to technical support,
    is the part where the machine has to auto-reboot every morning at 5am
    and load the scripting to do this.

    There are a couple of ways. One trick is a locked-down partition that
    has these tools embedded in it and manipulates the LILO boot arguments
    or grub.conf so the machine reboots once, and once only, into the newly
    installed partition, then reloads the whole mess every time after that.
    But this can be done more gracefully with tools such as tftp and
    various auto-installation tools.
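
    (A sketch of the one-shot part with LILO: the -R option sets the boot
    command line for the next reboot only, after which the machine falls
    back to its default entry. The "restore" label is a hypothetical entry
    in lilo.conf:)

    lilo -R restore    # boot the "restore" entry exactly once
    reboot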

    Another is to leave a spare partition to install the OS image into,
    usually in scratch space or another disk, reboot to that partition, then
    recopy *that* OS image back to the original partition. You can get away
    with quite a lot of tweaking this way.

    Keeping the tarball or other image up to date is its own problem. Either
    designate a machine as "the source machine(tm)", or once you've made a
    tarball, you can uncompress the tarball to a specific directory and
    "chroot" to that directory to do all sorts of reconfiguration, update,
    etc. without even requiring a dedicated machine to work from.
     
    Nico Kadel-Garcia, Aug 6, 2003
    #6
  7. Jean-David Beyer wrote:

    This is extremely common on computing cluster machines where users do
    not *have* local home directories, and all software should be
    re-installed regularly to prevent people leaving little packages or
    messed-up configurations for each other.
     
    Nico Kadel-Garcia, Aug 6, 2003
    #7
  8. I never heard of such a thing. If a user has no local home directory,
    how do people leave little packages for a user? Surely the ordinary
    users are not in a position to create home directories on the local
    machine for other users (or even themselves). If I understand the
    situation you describe, I would assume you set it up so no local files
    of any kind can be created (except by the super user), so local users
    cannot cause any problems like this.

    Anyone screwing around would have to do it on the remote file server,
    and that should be set up so users can affect only their own files.

    What am I missing?
     
    Jean-David Beyer, Aug 6, 2003
    #8
  9. Antoine Logean, Aug 6, 2003
    #9
  10. Home directories are AFS or NFS or SMB mounted from a local server.
    Experience. If you leave machines up and running 24x7 with no flushing
    of the OS, people *do* leave little love packages. And because
    UNIX/Linux are such fun and powerful operating systems, and because if
    you have shell or X windows access you can run programs out of "/tmp"
    which absolutely must be read-write-execute for all, you can't really
    prevent them from installing and running programs locally.

    It's often fairly trivial to set up a server for FTP, IRC, pirate
    software web sites, etc. running on a port for your buddies to use as a
    server from off-site, or given some time to play around you can run a
    fake login interface that steals people's passwords, or lock the screen
    on the machine so no one else can use it until you unlock it or the
    machine is rebooted, etc., etc. Take a look at the David LaMacchia case
    at MIT from a few years back for examples of what can happen.

    Also, the "flush me every day completely" is a good way to make sure the
    machines get *all* the upgrades and are in a configuration known to the
    admins, without having to integrate a new set of patches on top of an
    older running operating system and make sure you wound up with the same
    expected state.
     
    Nico Kadel-Garcia, Aug 6, 2003
    #10
  11. [f'up to comp.os.linux.setup]

    I would suggest using a more recent version of Dolly. The most recent
    is 0.57 (see [1]), which is much more recent than 0.2.

    Please note that Dolly is only a tool to distribute large files or
    partitions to any number of nodes in a switched network. To do what
    you want to do, I'd suggest booting your nodes with PXE into a small
    RAM-disk-based environment. Then start Dolly remotely on all clients
    and clone your disk from the master.

    - Felix

    [1] http://www.cs.inf.ethz.ch/CoPs/patagonia/#dolly
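
    (A sketch of what the PXE side might look like with PXELINUX; the
    kernel and initrd names are assumptions, not from Felix's setup:)

    # /tftpboot/pxelinux.cfg/default
    DEFAULT restore
    LABEL restore
        KERNEL vmlinuz-restore
        APPEND initrd=initrd-restore.img root=/dev/ram0 rw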
     
    Felix Rauch, Aug 6, 2003
    #11
  12. But they can't do that on linux.

    In any case, the standard solution to that situation is a boot via
    bootp and an nfs root. Heck ... they can even spend some time during
    the boot copying stuff to local.
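
    (For reference, an NFS root is selected with kernel parameters along
    these lines; the server address and export path are placeholders:)

    root=/dev/nfs nfsroot=192.168.1.1:/exports/root ip=dhcp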

    Peter
     
    Peter T. Breuer, Aug 6, 2003
    #12
  13. They can't. As to what they do in /tmp or their home directory (nfs
    mount), that's their business.
    They don't install. They can put whatever they like in /tmp. There's no
    harm at all in that.
    It's trivial, and stopped by closing access for ports above 1024.
    They always have the right to run such things. If they didn't, they
    would only have a finite number of programs they could run, and
    therefore they would not be using a general-purpose computing machine,
    but an appliance.
    Anyone can break a screen lock with Ctrl-Alt-Backspace.

    I simply check the md5sums of every file every day. There are no
    problems with what people put in tmp. Mind you, if somebody did invent a
    fake login screen I'd give him extra marks ...
    All files are crosschecked all the time. Typical output:


    --- /etc/md5check-1 Tue Aug 5 08:57:58 2003
    +++ /etc/md5check-1.new Wed Aug 6 08:59:17 2003
    @@ -1,14 +1,8 @@

    -There are 4 scanned files that differ between machines
    +There are 2 scanned files that differ between machines

    ---------------------------------------------------------------------
    -/.viminfo: ( 1) 9702c9c5f9a0667dd85dea94ccbc08c3
    - : it018 !UNIQUE FILE!
    DEBUG sigs = 20, file = /boot/map

    --------------------------------------------------------------------
    -/lost+found/#6845: ( 1) 6f6043049187e557ddb24cce457eda19
    - : it007 !UNIQUE FILE!

    ---------------------------------------------------------------------
    -/lost+found/#6863: ( 1) b7d8a76f482fbc2ea5ac5ea1ec6f2d1a
    - : it007 !UNIQUE FILE!
    +/lost+found/#6850: ( 1) b7d8a76f482fbc2ea5ac5ea1ec6f2d1a
    + : it008 !UNIQUE FILE!

    --------------------------------------------------------------------


    One example of rogue sysadmin, and some random minor corruptions.
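
    (Peter doesn't show the checker itself, and his tool evidently compares
    checksums across machines; a minimal single-machine sketch of the same
    idea, reusing the report paths from the output above, might be:)

    # build today's checksum list and diff it against yesterday's
    find / -xdev -type f -print0 | xargs -0 md5sum > /etc/md5check-1.new
    diff -u /etc/md5check-1 /etc/md5check-1.new
    mv /etc/md5check-1.new /etc/md5check-1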



    Peter
     
    Peter T. Breuer, Aug 6, 2003
    #13
  14. Not on a cluster or shared machine. Installing it in "/tmp" counts as
    installing it, and running an inappropriate or unauthorized service
    after you've logged out (which such love packages can easily do) is a
    potentially serious problem. Shared workstations should not be used by
    people not logged into them unless that's local policy to permit it, and
    it rarely is.
    Horse pucks. If I leave a pirate FTP or FSP server running out of /tmp,
    or a lovely little Xtank server for everyone to use after I leave the
    cluster and log out, I can easily cause all sorts of bandwidth problems
    for the cluster as well as making the machine unusable for others. And
    that sort of abuse is simply too easy to do.
    Horse pucks. Getting the firewall configuration just right to restrict
    incoming access for ports above 1024 is often a nightmare. And you can't
    entirely restrict it, since TCP does a fascinating bit of handing off of
    ports to allow the services on remote machines to actually respond back
    on a non-privileged port.
    While they're logged in, sure. After they log off and leave the cluster?
    Or leave it running more than 24 hours tying up public or shared
    machines? Nuh-uh.
    Nonsense. I can vlock all the terminal sessions and turn off the X server.
    This requires your kernel/glibc not to be screwed with. There are some
    *nasty* hacks going around that actually trick the md5sum into
    misreporting the checksums, including some loadable kernel module hacks.
    And you just entirely gave up on monitoring /tmp contents, which are
    therefore dangerous.
     
    Nico Kadel-Garcia, Aug 6, 2003
    #14
  15. Good for you. Longer for you on UNIX, much less on Linux. I assume you
    also haven't really tried to run cluster or workgroup machines extensively?
    You forgot /usr/tmp, the TeX/LaTeX mkfont capabilities, various servers
    that put binaries in /var (such as /var/ftp for FTP anonymous login,
    /var/www/cgi-bin for some Apache implementations, etc., etc.)

    If your site can work with that kind of restriction, fine. But you'd
    effectively break some of the default functionality of a typical
    machine and potentially generate a very serious maintenance headache
    which could get you slapped down by your manager or the users themselves
    revolting.
    Because people may be logged in and doing work at 2:01 AM. If you're
    going to kick them off the systems anyway, why not just do a clean
    system flush? It also avoids a lot of the potential headaches of trying
    to maintain systems in the field and keep the security patches up to
    date, since they all get flushed on a regular basis.
    *Sigh*. It's non-trivial, to say the least, to chroot users so that
    *only* /tmp and /var are made distinct from other users. I've done some
    interesting work with chroot, for OpenSSH use, and it's a very powerful
    tool, but has limitations.

    For example: if you want to provide such users access to local copies of
    perl or gcc and avoid the potentially quite serious performance and
    mixed environment maintenance hits of running it from a remote
    fileserver, you'd have to either NFS read-only mount or hard-link /usr
    into *every chroot user's home directory*. This way lies utter support
    madness.
    You've also just stuck the "/tmp" directory in everybody's home
    directories, and potentially expanded the required disk space of your
    primary fileserver by a huge factor.

    I think not.
    Well, true. But it flushes the machine every night and helps prevent it
    from being a widely published/"reliable" warez site.
    You upgrade the backup image. This is straightforward: uncompress
    *that* into a local directory on a specified work server, chroot into
    that directory, make the changes, then exit and rebuild your backup
    image. I've done that extensively and successfully for any number
    of systems.
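
    (A sketch of that update cycle; every path here is hypothetical:)

    mkdir -p /scratch/image
    tar xzpf /backup/image.tgz -C /scratch/image    # unpack current image
    chroot /scratch/image /bin/sh                   # update inside, then exit
    tar czpf /backup/image.tgz -C /scratch/image .  # repack the image
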
    It doesn't, really, but it does have to do with his followup that the
    systems are for students to learn security tools on a nightly rebuilt
    set of systems. Installation of some security tools and software tools
    depends on what order they were done in, and whether there were previous
    software configurations or changes made first. The differences aren't
    huge, but they can be quite confusing when upgrading versions of, say,
    Apache or PHP that make post-installation script based modifications to
    the configuration files. Voila, two machines that nominally have the
    same software have different checksums and potentially different
    behavior because of when their software updates were done and in what
    order, whether an intermediate software update has been discarded and
    replaced with a newer one, etc., etc.

    I've got scars from this kind of wackiness. It's why I really, really
    like working from a spanking clean disk image.
     
    Nico Kadel-Garcia, Aug 6, 2003
    #15
  16. It doesn't. It's just "there", not installed ...
    That I agree with. So firewall off the high ports.
    Uh .... no http servers? No ftp servers .. well, I suppose it depends
    what you mean by logged in. Authenticated and authorised, shall we say?
    That's different. Putting stuff in /tmp is fine. Running a service is
    different.
    I've never had any trouble - you can simply close them all off to nonlocal
    IPs, which should do nicely and never mind the niceties.
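
    (With iptables that might look like the following sketch; --syn matches
    only new inbound connections, so return traffic on ephemeral ports -
    the objection Nico raises above - still gets through. The local network
    range is a placeholder:)

    iptables -A INPUT -p tcp --syn --dport 1024:65535 \
        ! -s 192.168.1.0/24 -j DROP
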
    You can reap old processes, but I for one certainly don't object to
    people running jobs while they're not logged in!
    Well, I would frown on that, but it's not a disaster. Too much of that
    and I might warn them.
    Hit the reboot button.
    That's OK. It'd be caught next reboot.
    Don't worry about it. I know about them. One can see the module load
    via anomalous behaviour, including a miscount of processes and entries
    under /proc.
    I don't monitor /tmp contents, just as I don't monitor the contents of
    people's $HOME. They can put what they like there.

    Peter
     
    Peter T. Breuer, Aug 6, 2003
    #16
  17. Put 'em on the server and mount an NFS share.
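
    (For example, via an /etc/fstab entry along these lines; the server
    name and paths are placeholders:)

    # mount the image share read-only from the server
    imageserver:/exports/images  /backup  nfs  ro,hard,intr  0 0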

    --

    Fundamentalism is fundamentally wrong.

    To reply to this message, replace everything to the left of "@" with
    james.knott.
     
    James Knott, Aug 6, 2003
    #17
  18. One question. Have you considered using bootp and having a script load
    the kernel and such from the network (also meaning no boot disk is
    needed if your network cards support booting on power-up)? The kernel
    loads into memory, a script wipes the disks and restores the image, and
    control then passes back to the vmlinux kernel file on the disk.

    I myself have never used this method and do not know the minutiae of
    how to implement it - would it maybe work better for you?
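
    (The server side of a bootp setup is one dhcpd entry per client; a
    sketch with made-up addresses:)

    # /etc/dhcpd.conf on the boot server
    host client01 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.1.101;
        next-server 192.168.1.1;     # TFTP server holding the boot files
        filename "pxelinux.0";
    }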

    Mike.
     
    Michael Forster, Aug 6, 2003
    #18
  19. jmh, Aug 7, 2003
    #19
  20. The simplest way is to use "dd" to copy the filesystem to a file ("dd
    if=/dev/hda1 of=/backup/hda-hda1.backup"), but IIRC you'd have to
    restore it to a partition with the same size and geometry as the
    original.

    A more flexible program might be "partimage" which you can find on
    freshmeat.
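
    (The partimage save/restore pair looks roughly like this; the image
    path is a placeholder. Unlike dd, it copies only the used blocks and
    compresses them:)

    partimage save /dev/hda1 /backup/hda1.partimg.gz      # create image
    partimage restore /dev/hda1 /backup/hda1.partimg.gz   # put it back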
     
    John Thompson, Aug 9, 2003
    #20
