Channel: Oracle Bloggers

Cloning Zones with Unified Archives

Solaris 11.2 introduces a new native archive file type, the Unified Archive. Let's take a look at cloning zones with Unified Archives.

Cloning zones with Unified Archives differs in a few ways from the dataset clone-based cloning we get with 'zoneadm clone' for non-global zones.

The main difference between using an archive and 'zoneadm clone' is that the clone archive image is prepared for redistribution. Rather than being fully copied, the origin zone serves as a template for the creation of a new, independently deployable image.

With clone archives, various aspects of the file system are reverted to an as-installed state, and other aspects are cleaned up and sanitized. This makes for a fully portable, migratable image within the archive payload. It can also be carried to remote systems for cloning there.

To keep the images small for our examples, we'll install our zones with the new 'solaris-minimal-server' group package. This gives us a smaller zone image which has most of the core Solaris services available. The image makes for a nice starting point for application development.

One thing to note: the minimal server image doesn't include localization support. The 'system/locale' package is quite large, but we can have our cake and eat it too with package facets: we can add 'system/locale' to our minimal install and then turn off all of the locales we don't need.

Let's start by putting this install profile into a simple AI manifest which we'll use for our initial installation.

# cat /data/cfg/zone_mini.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="default">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locale facets -->
          <facet set="false">facet.locale.*</facet>
          <facet set="false">facet.locale.de</facet>
          <facet set="false">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
          <facet set="false">facet.locale.es</facet>
          <facet set="false">facet.locale.es_ES</facet>
          <facet set="false">facet.locale.fr</facet>
          <facet set="false">facet.locale.fr_FR</facet>
          <facet set="false">facet.locale.it</facet>
          <facet set="false">facet.locale.it_IT</facet>
          <facet set="false">facet.locale.ja</facet>
          <facet set="false">facet.locale.ja_*</facet>
          <facet set="false">facet.locale.ko</facet>
          <facet set="false">facet.locale.ko_*</facet>
          <facet set="false">facet.locale.pt</facet>
          <facet set="false">facet.locale.pt_BR</facet>
          <facet set="false">facet.locale.zh</facet>
          <facet set="false">facet.locale.zh_CN</facet>
          <facet set="false">facet.locale.zh_TW</facet>
          <facet set="false">facet.doc</facet>
          <facet set="false">facet.doc.*</facet>
        </image>
      </destination>
      <software_data action="install">
        <name>pkg:/group/system/solaris-minimal-server</name>
        <name>pkg:/system/locale</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>

For my purposes, I'm keeping English and unsetting the rest. You can configure your install as needed.
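As a side note, facets aren't fixed at install time; if a deployed zone later turns out to need another locale, it can be flipped on from inside the zone with pkg change-facet. A quick sketch (the German locale here is just an example):

```shell
# Inside the zone: enable the German locale facets that the manifest
# disabled. IPS re-evaluates the already-installed packages against the
# new facet values and lays down the newly-enabled files.
pkg change-facet facet.locale.de=True facet.locale.de_DE=True

# List the current locale facet settings to confirm.
pkg facet | grep locale
```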

Ok, let's install our zone.

# zoneadm list -cv
  ID NAME     STATUS      PATH                  BRAND     IP
   0 global   running     /                     solaris   shared
   - thing1   configured  /system/zones/thing1  solaris   excl

# zoneadm -z thing1 install -m /data/cfg/zone_mini.xml 

The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/thing1
Progress being logged to /var/log/zones/zoneadm.20140507T150634Z.thing1.install
       Image: Preparing at /system/zones/thing1/root.
 Install Log: /system/volatile/install.6115/install_log
 AI Manifest: /tmp/manifest.xml.Tfa46l
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: thing1
Installation: Starting ...

        Creating IPS image
Startup linked: 1/1 done
        Installing packages from:
            solaris
                origin:  http://host.domain/solaris11/pkg
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            166/166   16634/16634  163.4/163.4  2.3M/s

PHASE                                          ITEMS
Installing new actions                   26917/26917
Updating package state database                 Done 
Updating package cache                           0/0 
Updating image state                            Done 
Creating fast lookup database                   Done 
Updating package cache                           1/1 
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

        Done: Installation completed in 185.128 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as /system/zones/thing1/root/var/log/zones/zoneadm.20140507T150634Z.thing1.install

Now we've got a minimal zone. Notice that the install does a lot of package work. Since we're starting from scratch, the deployment creates a new IPS image, validates the publishers and host image, links the zone image to the host's, and then installs the packages. The install consists of building the list of packages, downloading them, and then invoking all of the install and post-install actions for each one. IPS does this very quickly, but it's a lot of work, so it takes some time; in this case, about 3 minutes.

A side effect of using Unified Archives to deploy Solaris systems is that the deployment time is typically quicker than with package-based installs. Since an archived system contains the system's package image, a deployment simply lays the image back down. IPS doesn't need to do all that work again, since it already did so during the deployment of the origin system.  

So, let's archive this zone up and deploy a clone of it to see how this works.

Again in the spirit of keeping things small, we can use the -e (exclude-media) option with archiveadm create. Since we don't need a portable and transformable image for this simple example, we won't need install media. More on embedded media later.

  # archiveadm create -z thing1 -e /data/archives/thing1.uar
  Initializing Unified Archive creation resources...
  Unified Archive initialized: /data/archives/thing1.uar
  Logging to: /system/volatile/archive_log.6239
  Executing dataset discovery...
  Dataset discovery complete
  Preparing archive system image...
  Beginning archive stream creation...
  Archive stream creation complete
  Beginning final archive assembly...
  Archive creation complete

That took about a minute and a half and resulted in an archive which is just shy of 200MB. There is quite a bit of compression in the image; as we can see from the verbose output, the deployed size is nearly 1GB.

# ls -lh /data/archives/thing1.uar
  -rw-r--r-- 1 root root  197M May  7 09:20 /data/archives/thing1.uar
# archiveadm info -v /data/archives/thing1.uar
  Archive Information
            Creation Time:  2014-05-07T15:18:59Z
              Source Host:  ducksiren
             Architecture:  i386
         Operating System:  Oracle Solaris 11.2 X86
         Recovery Archive:  No
                Unique ID:  31542e88-dfe9-4e96-f39f-f622f1f2fdbf
          Archive Version:  1.0

  Deployable Systems
          'thing1'
               OS Version:  0.5.11
                OS Branch:  0.175.2.0.0.38.0
                Active BE:  solaris
                    Brand:  solaris
              Size Needed:  971MB
                Unique ID:  a4ac1faf-4b7d-c6cf-f5fc-83a3b8106298
                Root-only:  Yes

Now that we have an archive, we can deploy new zones directly from it.  As always, deploying a zone is two steps; the zone configuration is first created and then it is installed.

The zonecfg and zoneadm utilities have been updated to work with Unified Archives. This allows for direct cloning of the origin configuration stored within the archive, as well as installation of a new zone directly from the archive. The two steps are not tied to each other: any valid zone configuration can be installed from an archive; the configuration need not be sourced from the archive.
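For instance, nothing stops us from building the configuration by hand and only sourcing the install from the archive; a sketch, using a hypothetical zone name:

```shell
# Create a default zone configuration without consulting the archive...
zonecfg -z thing3 create

# ...and still install its contents from the archive.
zoneadm -z thing3 install -a /data/archives/thing1.uar
```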

Let's create a new zone from the archive, which will mirror the origin zone's configuration, and then install it.


# zoneadm list -cv
  ID NAME     STATUS      PATH                  BRAND     IP
   0 global   running     /                     solaris   shared
   - thing1   installed   /system/zones/thing1  solaris   excl
  # zonecfg -z thing2 create -a /data/archives/thing1.uar
  # zoneadm list -cv
    ID NAME     STATUS      PATH                  BRAND     IP
     0 global   running     /                     solaris   shared
     - thing1   installed   /system/zones/thing1  solaris   excl
     - thing2   configured  /system/zones/thing2  solaris   excl

So, easy enough. The new zone 'thing2' has a configuration which is based upon the configuration of 'thing1'. Now we can install the new zone directly from the archive as well, with zoneadm.

# zoneadm -z thing2 install -a /data/archives/thing1.uar 
The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/thing2
  Progress being logged to /var/log/zones/zoneadm.20140507T153751Z.thing2.install
      Installing: This may take several minutes...
    Install Log: /system/volatile/install.12268/install_log
    AI Manifest: /tmp/manifest.thing2.jqaO7x.xml
       Zonename: thing2
  Installation: Starting ...

        Commencing transfer of stream: a4ac1faf-4b7d-c6cf-f5fc-83a3b8106298-0.zfs to rpool/VARSHARE/zones/thing2/rpool
        Completed transfer of stream: 'a4ac1faf-4b7d-c6cf-f5fc-83a3b8106298-0.zfs' from file:///data/archives/thing1.uar
        Archive transfer completed
  Installation: Succeeded

      Zone BE root dataset: rpool/VARSHARE/zones/thing2/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating image format
  Image format already current.
  Updating non-global zone: Linking to image /.
  Processing linked: 1/1 done
  Updating non-global zone: Syncing packages.
  No updates necessary for this image. (zone:thing2)
  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.

        Done: Installation completed in 104.898 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.
  Log saved in non-global zone as /system/zones/thing2/root/var/log/zones/zoneadm.20140507T153751Z.thing2.install

Simple, and this one deploys in about a minute and a half. IPS still links the image into the global zone and does some validation, but the heavy lifting was already done during the deployment of the origin system.

This archive can be used to deploy any number of zones on any number of host systems. The only criterion for support is that the host is a supported platform of the same ISA. This means that archives can be used for all sorts of migrations and transforms, even across virtualization boundaries. More on that later, as well.
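Deploying on another host is just a copy plus the same two steps we used above; a sketch, with a hypothetical remote host name:

```shell
# Carry the archive to a second host of the same ISA.
scp /data/archives/thing1.uar otherhost:/data/archives/

# Then, on otherhost, clone exactly as we did locally.
zonecfg -z thing4 create -a /data/archives/thing1.uar
zoneadm -z thing4 install -a /data/archives/thing1.uar
```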

Support for kernel zones is transparent: zonecfg and zoneadm work the same way to create and install a new kernel zone from an archive. By the way, for a kernel zones primer and a bit more detail, check out Mike Gerdts' Zones blog.

Note that when we create the archive this time, we'll need the embedded media, which is built by default. This media is used to boot and install the new kernel zone. This all happens under the covers, of course. Just keep in mind that if you might want to deploy an archive into a kernel zone in the future, don't use --exclude-media.

Ok, let's create a clone archive of a kernel zone and build it a friend.

# zoneadm list -cv
    ID NAME     STATUS      PATH                  BRAND       IP
     0 global   running     /                     solaris     shared
     3 vandemar running     -                     solaris-kz  excl
     - thing1   installed   /system/zones/thing1  solaris     excl
     - thing2   installed   /system/zones/thing2  solaris     excl
# archiveadm create -z vandemar /data/archives/vandemar.uar
  Initializing Unified Archive creation resources...
  Unified Archive initialized: /data/archives/vandemar.uar
  Logging to: /system/volatile/archive_log.15994
  Executing dataset discovery...
  Dataset discovery complete
  Creating install media for zone(s)...
  Media creation complete
  Preparing archive system image...
  Beginning archive stream creation...
  Archive stream creation complete
  Beginning final archive assembly...
  Archive creation complete
# zonecfg -z croup create -a /data/archives/vandemar.uar 
  # zoneadm -z croup install -a /data/archives/vandemar.uar 
  Progress being logged to /var/log/zones/zoneadm.20140507T184203Z.croup.install
  [Connected to zone 'croup' console]
  Boot device: cdrom1  File and args: -B install=true,auto-shutdown=true -B aimanifest=/system/shared/ai.xml
  reading module /platform/i86pc/amd64/boot_archive...done.
  reading kernel file /platform/i86pc/kernel/amd64/unix...done.
  SunOS Release 5.11 Version 11.2 64-bit
  Copyright (c) 1983, 2014, Oracle and/or its affiliates. All rights reserved.
  Remounting root read/write
  Probing for device nodes ...
  Preparing image for use
  Done mounting image
  Configuring devices.
  Hostname: solaris
  Using specified install manifest : /system/shared/ai.xml
solaris console login: 
  Automated Installation started
  The progress of the Automated Installation will be output to the console
  Detailed logging is in the logfile at /system/volatile/install_log
  Press RETURN to get a login prompt at any time.
  18:43:58    Install Log: /system/volatile/install_log
  18:43:58    Using XML Manifest: /system/volatile/ai.xml
  18:43:58    Using profile specification: /system/volatile/profile
  18:43:58    Starting installation.
  18:43:58    0% Preparing for Installation
  18:43:58    100% manifest-parser completed.
  18:43:58    100% None
  18:43:58    0% Preparing for Installation
  18:43:59    1% Preparing for Installation
  18:44:00    2% Preparing for Installation
  18:44:00    3% Preparing for Installation
  18:44:00    4% Preparing for Installation
  18:44:00    5% archive-1 completed.
  18:44:00    8% target-discovery completed.
  18:44:03    Pre-validating manifest targets before actual target selection
  18:44:03    Selected Disk(s) : c1d0
  18:44:03    Pre-validation of manifest targets completed
  18:44:03    Validating combined manifest and archive origin targets
  18:44:03    Selected Disk(s) : c1d0
  18:44:03    9% target-selection completed.
  18:44:03    10% ai-configuration completed.
  18:44:04    9% var-share-dataset completed.
  18:44:08    10% target-instantiation completed.
  18:44:08    10% Beginning archive transfer
  18:44:09    Commencing transfer of stream: 072fdc78-431e-6aa6-89d5-a0088766a4af-0.zfs to rpool
  18:44:17    36% Transferring contents
  18:44:23    67% Transferring contents
  18:44:25    78% Transferring contents
  18:44:27    87% Transferring contents
  18:44:31    Completed transfer of stream: '072fdc78-431e-6aa6-89d5-a0088766a4af-0.zfs' from file:///system/shared/uafs/OVA
  18:44:31    89% Transferring contents
  18:44:33    Archive transfer completed
  18:44:34    90% generated-transfer-1447-1 completed.
  18:44:34    90% apply-pkg-variant completed.
  18:44:34    Setting boot devices in firmware
  18:44:34    91% boot-configuration completed.
  18:44:35    91% update-dump-adm completed.
  18:44:35    92% setup-swap completed.
  18:44:35    92% device-config completed.
  18:44:37    92% apply-sysconfig completed.
  18:44:37    93% transfer-zpool-cache completed.
  18:44:44    98% boot-archive completed.
  18:44:44    98% transfer-ai-files completed.
  18:44:44    98% cleanup-archive-install completed.
  18:44:45    100% create-snapshot completed.
  18:44:45    100% None
  18:44:45    Automated Installation succeeded.
  18:44:45    You may wish to reboot the system at this time.
  Automated Installation finished successfully
  Shutdown requested. Shutting down the system
  Log files will be available in /var/log/install/ after reboot
  svc.startd: The system is coming down.  Please wait.
  svc.startd: 115 system services are now being stopped.
  syncing file systems... done
[NOTICE: Zone halted]
[Connection to zone 'croup' console closed]

        Done: Installation completed in 180.636 seconds.

And there we go. We created a kernel zone archive in a few minutes and deployed a new kernel zone from it in a few more minutes.

