Channel: Oracle Bloggers

It was never easier than today - Upgrading/Patching Solaris Cluster 4 and Solaris 11


Summary

Step 1: run "scinstall -u update" on all cluster nodes while the system is running.

Step 2: run "init 6" one node at a time to perform a rolling upgrade.

That looks easy, doesn't it?

Patching Solaris 10 and Solaris Cluster 3.x

If you ever had to patch a Solaris 10 system, or even one with non-global zones installed, or, worse still, one with clustered failover zones, you know that this was a really hard job. It required extremely careful planning, many hours of analyzing which patches to install, designing and testing a fallback plan, and so on. And when the bell rang, you had to be fast and precise to finish your job within the downtime window that the business people had granted you.

Live Upgrade made this task considerably easier, because the long-running part of the job could be done while the system was still in production. But not everyone liked that approach.

Everything has changed with Solaris 11 and boot environments. And with the new packaging system, IPS, and its dependency framework. And with easy-to-maintain repositories. And with the integration of Oracle Solaris Cluster 4 into these new technologies.

It is so easy now to update a system! Did I say this already?
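For a plain Solaris 11 box, before any cluster is involved, the boot-environment update cycle boils down to a handful of commands. Here is a minimal dry-run sketch of that cycle; it only echoes each command instead of executing it, so it can be reviewed anywhere (the BE name "solaris" is just the default from this demo):

```shell
#!/bin/sh
# Dry-run sketch of the Solaris 11 boot-environment update cycle.
# Nothing is executed; each step is echoed so the flow can be reviewed.
run() { echo "+ $*"; }

run beadm list              # inspect the existing boot environments
run pkg update              # IPS clones the active BE, updates the clone,
                            # and activates the clone for the next boot
run init 6                  # reboot into the freshly updated BE
run beadm activate solaris  # fallback: re-activate the old BE if needed
```

On a cluster node you would not run "pkg update" directly, as explained below, but the underlying BE mechanics are exactly these.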

Upgrading a Solaris 11/Solaris Cluster 4 cluster

Oracle Solaris 11 has added some excellent new features. Some of them, though not all, are based on technologies that were already present in Solaris 10, e.g. ZFS. But with Solaris 11, all of them, plus some new ones such as the new packaging system, have been combined into a great life-cycle management environment. The zones framework is part of this, as is Solaris Cluster 4.

Let's look at my new demo environment in the Oracle Solution Center.

[cluster-05b:root] beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          5.43G static 2013-05-15 06:56
[cluster-05b:root] cluster status
Node Name                                       Status
---------                                       ------
cluster-06b                                     Online
cluster-05b                                     Online
...
--- Zone Cluster Status ---

Name   Brand     Node Name     Zone Host Name   Status   Zone Status
----   -----     ---------     --------------   ------   -----------
zc11   solaris   cluster-05b   zc11-05          Online   Running
                 cluster-06b   zc11-06          Online   Running

[cluster-05b:root] zoneadm list -icv
  ID NAME              STATUS     PATH                           BRAND    IP
   0 global            running    /                              solaris  shared
   1 zc11              running    /zones/zc11                    solaris  shared
   5 local-shared-zone running    /zones/local-shared-zone       solaris  shared
[cluster-05b:root] pkg list entire ha-cluster/system/core
NAME (PUBLISHER)                              VERSION                    IFO
entire                                        0.5.11-0.175.1.6.0.4.0     i--
ha-cluster/system/core (ha-cluster)           4.1-2.1                    i--

There are two servers (T3-2s, so I make heavy use of logical domains), each with one control domain and several guest domains which are clustered. The cluster we are interested in has the guests cluster-05b and cluster-06b. The nodes have only one BE, called solaris, and are running a very recent Solaris 11.1 SRU and a very recent Solaris Cluster 4.1 SRU. There is a zone cluster installed (zc11), plus one non-global zone, local-shared-zone, which is not (yet) under cluster control, i.e. there are no failover zones.

Now, what do I have to do to update this cluster?

Very easy (I think I said that before).

  1. Perform an upgrade of the Solaris OS and Solaris Cluster on both nodes while the system is running.
  2. To perform a rolling upgrade, reboot node 1 first.
  3. Wait until the rebooted node 1 has fully booted and rejoined the cluster; then reboot node 2.
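The three steps above can be sketched as a small script. This is a dry run only: the "run" helper just echoes each command, the node and BE names are the ones from this demo cluster, and in real life you would of course watch "cluster status" output between reboots rather than fire-and-forget:

```shell
#!/bin/sh
# Dry-run sketch of the rolling upgrade. Node names and the BE name
# are examples from this demo cluster; nothing is actually executed.
NODES="cluster-05b cluster-06b"
NEW_BE="solaris-11.1.9"
run() { echo "+ $*"; }

# Step 1: upgrade OS and cluster packages on every node while it is running.
for node in $NODES; do
    run ssh "$node" scinstall -u update -e "$NEW_BE"
done

# Steps 2 and 3: reboot one node at a time; verify the node has rejoined
# the cluster ("cluster status") before rebooting the next one.
for node in $NODES; do
    run ssh "$node" init 6
    run ssh "$node" cluster status
done
```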

Two commands per cluster node - that is all that is needed. One thing to watch out for: you must use the "scinstall" command for the upgrade, not "pkg update". This is because scinstall performs some cluster-specific steps that would be omitted when only using "pkg update".

[cluster-05b:root] scinstall -u update -e solaris-11.1.9
Calling "scinstall -u preupgrade"
....
Finalize: Linked images: 0/3 done; 1 working: zone:zc11
Finalize: Linked images: 2/3 done; 1 working: zone:local-shared-zone
...
done
Calling "scinstall -u postupgrade"
...
scinstall: A clone of solaris exists and has been updated and activated. On 
the next boot, the Boot Environment solaris-11.1.9 will be mounted on "/". 
Reboot when ready to switch to this updated BE.

[cluster-05b:root] beadm list
BE             Active Mountpoint Space  Policy Created         
--             ------ ---------- -----  ------ -------         
solaris        N      /          28.88M static 2013-05-15 06:56
solaris-11.1.9 R      -          6.54G  static 2013-07-31 14:50

Now the last step(s): "init 6" to reboot into the new BE, one node after the other if you want to perform a rolling upgrade.

[cluster-05b:root] init 6
[cluster-06b:root] init 6

And we are done: Solaris has been updated to 11.1.9, and Oracle Solaris Cluster has been updated to 4.1-3.2. Downtime was minimal: only the reboot time per node. Services were automatically switched between the nodes, so there was only a minimal service interruption.

Fallback solution automatically created

And besides the easy upgrade, the second-best feature (or maybe the best, depending on your point of view) is this: a safe fallback solution has been created automatically, in case something does not work in the new boot environment:

[cluster-05b:root] beadm list
BE             Active Mountpoint Space  Policy Created         
--             ------ ---------- -----  ------ -------         
solaris        N      /          28.88M static 2013-05-15 06:56
solaris-11.1.9 R      -          6.54G  static 2013-07-31 14:50

The old BE, called solaris, is still available. If you want to boot back into it, just activate it and reboot:

[cluster-05b:root] beadm activate solaris
[cluster-05b:root] init 6

And you are back.
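By the way, the Active column of "beadm list" is the quickest way to see where the next boot will land: N marks the BE that is active now, R the one that will be active on reboot (NR means both). A small sketch, using the sample output captured above and plain awk (in a real session you would pipe "beadm list" itself):

```shell
#!/bin/sh
# The Active column of `beadm list` carries two flags:
#   N = active now, R = active on (next) reboot, NR = both.
# Sample lines taken from the transcript above; awk picks out the BE
# that the next boot will use.
SAMPLE='solaris        N      /          28.88M static 2013-05-15 06:56
solaris-11.1.9 R      -          6.54G  static 2013-07-31 14:50'

echo "$SAMPLE" | awk '$2 ~ /R/ { print "next boot: " $1 }'
# prints: next boot: solaris-11.1.9
```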

Summary

It has never been easier than with Oracle Solaris 11 and Oracle Solaris Cluster 4 to update a clustered system. Just keep in mind to use "scinstall -u update" rather than a plain "pkg update"!

