Every so often our competitors like to spread false information about our solutions to gain an unfair advantage. I wanted to address this for the Enterprise Manager Snap Clone functionality, so this blog post describes the Snap Clone solution as it currently stands (November 2014).
Let’s start by introducing Snap Clone functionality. I blogged on that a few months back, but a few months can be an eternity in software development terms, so here’s an update on the salient points of what Snap Clone does for you over the various EM releases:
- EM12cR2 provided Snap Clone for NAS storage (NetApp and Sun ZFSSA). It provided RMAN backup-based clones, and included the Snap Clone Analyzer to show you the storage savings you could make using Snap Clone
- EM12cR3 added support for Snap Clone using the Solaris File System (ZFS) and admin flows for Snap Clone for PDBs (pluggable databases)
- EM12cR4 added a lot more
- Snap Clone using CloneDB – this is the biggie, as it means Snap Clone can now be used with ANY Oracle database release that supports CloneDB, regardless of what storage it’s on
- Data Guard standby as a test master – allows offloading the impact of creating the test master from your Production environment
- NetApp Ontap 8.x cluster mode support
- Certification for engineered systems, with I/O over Infiniband
- Support for NFSv4
- And coming down the pipe, support for:
- Integrated data lifecycle management
- Snap Clone using EMC SAN and ASM
- Admin flows for test master creation
- Integration with masking, patching, upgrades etc.
Looking at it from the cloning options that are now supported, it means you can either provide full clones using RMAN Restore, RMAN Duplicate or Data Pump, or thin clones via either software solutions (ZFS and CloneDB) or hardware solutions (Sun ZFSSA, NetApp and soon EMC). Let’s touch on some of these in a bit more detail.
Snap Clone using Solaris File System (ZFS)
Snap Clone using ZFS uses a single stock Solaris 11.1+ image which can be either physical or virtual (note: it doesn’t use the Sun ZS3 appliance). It supports both NAS and SAN. If you are using SAN, then mount the LUNs as raw disk and format with the ZFS filesystem. It’s important to note here that this does NOT require any snapshot/cloning licenses from the storage vendor, as these features are available for free.
Additional features provided with this solution include compression, de-duplication, I/O caching and so on. If you also need HA in this configuration, that can be handled externally either via Solaris Clusters, or by using the HA features of the underlying hypervisor.
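The space efficiency of this approach comes from copy-on-write: a thin clone shares every block with its source snapshot and only stores blocks that have since been modified. A minimal Python sketch of that idea (purely illustrative — this is a model of the concept, not how ZFS is actually implemented):

```python
# Illustrative copy-on-write model: a clone shares blocks with its
# source snapshot and only stores blocks that have been overwritten.
class Snapshot:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # frozen block map at snapshot time

class Clone:
    def __init__(self, snapshot):
        self.snapshot = snapshot
        self.delta = {}              # only modified blocks live here

    def read(self, block_id):
        # Prefer the clone's private copy; fall back to the shared snapshot.
        return self.delta.get(block_id, self.snapshot.blocks[block_id])

    def write(self, block_id, data):
        self.delta[block_id] = data  # copy-on-write: snapshot stays untouched

    def private_bytes(self):
        return sum(len(d) for d in self.delta.values())

snap = Snapshot({0: b"A" * 8192, 1: b"B" * 8192})
clone = Clone(snap)
clone.write(1, b"X" * 8192)
print(clone.read(0) == snap.blocks[0])  # True: block 0 is still shared
print(clone.private_bytes())            # 8192: one block of private space
```

A clone of a terabyte-scale database therefore starts out consuming almost no space, and grows only as blocks diverge from the snapshot.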
Diagrammatically, the configuration looks like this:
CloneDB using dNFS
With CloneDB using dNFS, you get the ability to create thin copies of a database from RMAN image copies. This uses the NFS v3 client (dNFS) that has been embedded in the database technology since Oracle Database 11g. Currently, this is supported for single instance databases, but only on file systems (i.e. ASM is not yet supported).
The advantages of this approach include:
- It’s easy to setup
- No special storage software is needed
- It works on all platforms
- It’s both time efficient (instantaneous cloning) and space efficient (you can create multiple clones based on one backup)
- It uses dNFS to improve the performance, high availability and scalability of NFS storage
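To put that space efficiency in concrete terms, here is a back-of-the-envelope comparison (the sizes and change rate are hypothetical) between full clones and thin clones that share a single RMAN image copy:

```python
def full_clone_storage(db_size_gb, num_clones):
    # Every full clone is a complete copy of the database.
    return db_size_gb * num_clones

def thin_clone_storage(db_size_gb, num_clones, changed_fraction):
    # Thin clones share one image copy; each clone stores only the
    # blocks it has changed (changed_fraction of the database).
    return db_size_gb + db_size_gb * changed_fraction * num_clones

# Hypothetical example: a 1 TB database, 10 clones, ~5% change per clone.
print(full_clone_storage(1024, 10))        # 10240 GB for 10 full copies
print(thin_clone_storage(1024, 10, 0.05))  # 1536.0 GB: one copy + deltas
```

Ten full clones need ten times the database size; ten thin clones need little more than one copy plus their accumulated changes.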
Snap Clone on ASM and EMC Storage
Using Snap Clone on ASM and EMC storage provides the ability to create ‘live’ thin clones of databases that are on ASM. A live clone is NOT snapshot based but rather a live clone of the database that can be within the same cluster or indeed another one. Both single instance and RAC are supported – supported versions are 10.2.0.5 or higher of the database, and 11.2 or higher of the Grid Infrastructure code. This functionality works on both EMC VMAX (with TimeFinder VP Snap) and VNX storage appliances.
Diagrammatically, the configuration looks like this:
End to End Automation
Now let’s look at the data lifecycle and how data moves through this environment. To start off with, there are a few concepts you need to understand:
- Production DB – Obviously, you need to identify the production database used for cloning.
- Backups – As most (hopefully all!) of us currently do, you need to take regular backups (RMAN, Data Pump exports, etc.) These backups can then be used through this process as well.
- Masking / Subsetting – When moving data from the Production database, clearly it’s important to mask sensitive data. Optionally, you may also want to (or, with very large databases, have to) subset the data to reduce the storage footprint.
- Test Master – the Test Master is a sanitized (see previous bullet) copy of production data for use in Dev / Test environments OR a Data Guard Standby database. It can then be used as the source for our snapshotting.
- Clones – Depending on your needs, these may be full clones or snap (thin) clones. Full clones are often used for performance / stress testing; snap clones may often be used for functional testing. Which one you use is generally determined by the amount of storage you have available to you and the size of the Test Master.
- Refresh – the Refresh process is what you use to keep your clone in sync with data changes in Production.
How these concepts all relate together is possibly best shown by the following:
A couple of points of explanation:
- Notice that the data in the Test Master has been masked but still remains in a format that looks similar to the original data. That’s important if you want to use the clones to examine performance plans that may have gone awry for some reason. One drawback of using the Data Guard standby approach is that, because of its very nature, masking and subsetting are not possible in this scenario. You would need to take manual, discrete copies of the data from Production, which could of course be automated to occur at scheduled intervals.
- On the flip side, using the Data Guard Standby means that data refresh to the Test Master is both automatic and instantaneous, so your data can be much more up to date than it might be if you were using discrete copies.
- The refresh process can occur either against the Test Master or to backups of your Production database. If you have configured this as a self-service admin flow, self-service users can then refresh their existing clones with new data without you needing to be involved.
Full or Snap Clone: How It Works
With that in mind, let’s talk now about the details of how it works. In simple terms, the Test Master (or the Data Guard standby if you’re using that) is regularly refreshed with current data from Production. From the Test Master / Standby, you can make as many scheduled or manual storage snapshots or RMAN backups as you like. These are called “Profiles” and are indicated by the t0, t1, t2, … tN times in the diagram below. From each of these profiles, clones can be taken. Each user can have a personal read-write database clone, as the data was at the time the profile was created, and of course, can take as many private backups of their clone as they desire:
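The profile/clone relationship described above can be sketched as a small data model (the class and method names here are illustrative only, not an actual EM API):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Profile:
    """A point-in-time capture (storage snapshot or RMAN backup) of the Test Master."""
    taken_at: datetime

@dataclass
class Clone:
    """A private read-write database created from one profile."""
    owner: str
    profile: Profile

@dataclass
class TestMaster:
    profiles: list = field(default_factory=list)

    def take_profile(self, when):
        p = Profile(taken_at=when)
        self.profiles.append(p)   # the t0, t1, ... tN series
        return p

    def create_clone(self, owner, profile):
        return Clone(owner=owner, profile=profile)

tm = TestMaster()
t0 = tm.take_profile(datetime(2014, 9, 16))
t1 = tm.take_profile(datetime(2014, 9, 19))
clone = tm.create_clone("dev_user", t0)
# A refresh amounts to re-creating the clone from a newer profile.
clone = tm.create_clone("dev_user", t1)
print(clone.profile.taken_at)
```

Any number of users can hang clones off the same profile, and refreshing simply moves a user's clone to a later point in the t0…tN series.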
Self Service Provisioning
Clearly, all of this is not much use to you as an administrator if you’re the one who has to keep building all of this, so it’s important to have a way of allowing users to build their own environments while ensuring resource usage is restricted based on roles. That’s where self service provisioning comes in. EM12c comes with an out of the box self service portal capability. You as the administrator create a catalog of different database configurations, possibly with a variety of datasets, which the self service user can then select. Have a look at the following diagram:
The larger box at the back left shows the standard Database Cloud Self Service Portal, as seen by a self service user. To the left, you can see this particular user has created 4 databases, along with the memory and storage they have consumed. This particular user has been granted permission to create 12 databases, using a maximum of 50 GB of memory and 100 GB of storage. These limits have been set by you as the self service administrator.
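Enforcing those limits amounts to a simple quota check before each request is honored. A sketch (the limits match the example above; the function itself is hypothetical, not EM code):

```python
def within_quota(current, requested, limits):
    """Return True only if the request fits inside every remaining quota."""
    return all(current[k] + requested[k] <= limits[k] for k in limits)

# Quotas granted by the self service administrator.
limits    = {"databases": 12, "memory_gb": 50, "storage_gb": 100}
# What the user has already consumed (4 databases, per the example).
current   = {"databases": 4,  "memory_gb": 16, "storage_gb": 40}

requested = {"databases": 1,  "memory_gb": 4,  "storage_gb": 20}
print(within_quota(current, requested, limits))  # True: the request fits

too_big   = {"databases": 1,  "memory_gb": 4,  "storage_gb": 200}
print(within_quota(current, too_big, limits))    # False: storage quota exceeded
```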
You are also responsible for building a number of templates to define the configurations they can choose from. When the self service user clicks on the “Request Database” button on the Database Cloud Self Service Portal, the box on the right appears, showing them the service templates they are allowed to choose from. In this example I’m showing you, they can choose from:
- Full 1.5 TB Database Clone – this is a full copy of the Production Database
- Generic Application Schema – a copy of a generic application schema database
- HR Sample Schema – allows the user to create a copy of the HR sample schema with data
- Small 200 GB database from RMAN backup – provides a subset of an existing database for functional testing
- StoreFront Application Schema – creates a copy of an in-house application called StoreFront, complete with data
(NOTE: These are just examples of the sort of thing you can add as templates. Obviously some of these would fail if the self service user tried to create them because the resource quotas you have given them would be exceeded. 🙂 )
Now, let’s look at an example of the user interface you would see specifically for the Snap Clone functionality. As the self service user, when you select a template to create a thin clone, it takes you to a page that looks like this:
The inputs you provide are:
- Database SID – the SID for the database that will be created as part of this request
- Database Service Name – the service name that you will use to connect to this database after creation
- Optionally, a schedule – when the database will be created, and how long it will be available. The defaults are to start the creation immediately and keep the database indefinitely, but you can change these to meet your needs
- User Name and Password – the username and password that will be assigned for you to manage the database that is being created
- Service Instance Properties – again, these are optional but you can specify things like Lifecycle Status, contact name and so on
- Snapshots – this is really the most important part, as it is here where you specify the snapshot time (i.e. the profile) that you will use to create the thin clone. In the example shown here, we are using a profile built on September 16, 2014
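Collected together, a Snap Clone request is essentially a small structure like the following (the field names and validation function are illustrative, not the actual EM request format):

```python
from datetime import datetime

def validate_request(req):
    """Minimal sanity check on a clone request; raises on missing fields."""
    for key in ("db_sid", "service_name", "username", "password", "profile_time"):
        if not req.get(key):
            raise ValueError(f"missing required field: {key}")
    return req

request = validate_request({
    "db_sid": "DEV1",                 # SID of the database to be created
    "service_name": "dev1.example.com",
    "start": None,                    # None = create immediately (the default)
    "duration": None,                 # None = keep indefinitely (the default)
    "username": "devadmin",           # credentials for managing the new clone
    "password": "secret",
    "properties": {"lifecycle_status": "Development"},
    "profile_time": datetime(2014, 9, 16),  # which profile to clone from
})
print(request["db_sid"])
```

The profile time is the one field with no sensible default: it pins the clone to a specific point in the Test Master's snapshot history.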
Once the database has been created, you will see it listed in the Services region of the Database Cloud Self Service Portal:
By clicking on the name of the service as shown above, you are taken to the database home page (note this is still all being done by the self service user, NOT the self service administrator):
It is from here that you can click on the “Refresh” button to refresh the data to a more recent profile. Clicking that button brings up a pop-up window allowing you to select the date you want to refresh your clone to. In this example, I can choose a snapshot that was taken on September 19:
That’s how straightforward the process is! And of course, in inimitable Oracle style, there is so much more coming just around the corner.
Addressing the Misinformation
Now that you’ve seen the capabilities of the Snap Clone product, let’s come back to the issue I raised at the beginning of this post – misinformation. I don’t really want to take aim at any particular company here, just the claims, so I won’t name them except as “Product X”. While some of what they are claiming is certainly correct, some of it is only partially true and some of it is just plain wrong. The claims that fall into these latter categories include:
- Snap Clone allows customers to leverage existing NetApp or ZFS storage hardware more efficiently but Product X installs on commodity hardware – well, yes Snap Clone does allow that, but as I mentioned above it also supports CloneDB using dNFS, and ASM on EMC storage. Adding CloneDB using dNFS, which is functionality supported natively since the 11.2 release of the Oracle database, means that Snap Clone is supported on any hardware that Oracle Database 11.2 or later is supported on, not just on NetApp or ZFS. And of course, the addition of EMC storage just broadens that support base even further.
- Product X is the only technology in the industry to provide “Live Archive” – archiving and providing point in time versions of a database. This is EXACTLY what Snap Clone provides, so please don’t say your product is the only one in the industry that does that!
- Product X is the only technology in the industry to provide “Business Intelligence” – 24×7 ETL windows and fast refresh of new data in minutes. Again, not true. True business intelligence normally requires a summarized copy of your Production data, plus data from other sources as well, so any product that simply refreshes from your Production database would not have the capabilities needed by BI. If, however, your BI requirements are simple enough that they can be resolved by having a copy of just your Production data, then Snap Clone provides that capability as well.
- Product X is the only technology in the industry to provide “Data Center Migration” – “Product X supports active replication between Product X appliances making cloud migration simple, efficient and continually synced between in house and cloud.” That functionality also exists in EM12c.
- “Snap Clone is a feature that is a simple and nice enhancement for the usage of specialized existing hardware, either Netapp or ZFS storage to make static thin clones at one point in time.” – a competitor’s words. As already mentioned, Snap Clone is NOT restricted to using specialized hardware AND clones can be refreshed as needed, so this statement is just plain wrong.
- Scale, scale, scale – With Snap Clone, you can scale from 1 to thousands of clones. Some competing tools would require multiple instances of their product to be deployed to achieve that, all of which adds overhead.
- Protection of your existing investments – using Snap Clone, you have the choice between hardware solutions that you might already have, as well as software solutions. We also use trusted technologies like Data Guard for test master refresh.
I could go on, but let’s leave it there and look more at another important area.
Once you have provisioned your data, there are a lot of other important areas that need to be looked at as well. It’s simply not enough to provision new databases and then leave the management of those environments alone. All that does is lead to database sprawl, creating management headaches. So what are the other areas you need to look at? They include:
- Patching – Any computing environment will, over time, require patching, either as security issues and bugs are found and addressed or as major releases occur. EM12c provides fully integrated patch management functionality to address this space
- Compliance – EM 12c provides a rich and powerful compliance management framework that automatically tracks and reports conformance of managed targets to industry, Oracle, or internal standards. Enterprise Manager 12c ships with compliance standards for Oracle hardware and software including Database, Exadata Database Machine, Fusion Middleware, and more. These compliance standards validate conformance to Oracle configuration recommendations, best practices, and security recommendations.
- Performance Management – EM12c includes a variety of tools to manage performance, from ASH Analytics, SQL Performance Analyzer and Database Replay at the database level to a complete Application Performance Management (APM) solution for custom applications and Oracle applications (including E-Business Suite, Siebel, PeopleSoft, JD Edwards and Fusion Applications).
- Chargeback – Chargeback is used to allocate the costs of IT resources to the people or organizations who consume them. While it can be applied in situations where IT resources are dedicated, it is particularly relevant in situations where resources are shared, as without some way to meter and charge for consumption there will be a tendency for users to use more resources than they need. This problem is exacerbated in cloud environments where users are able to provision their own resources using self-service.
All of these areas and so many more are covered in EM12c, along with the Snap Clone functionality we started this post looking at. EM12c is Oracle’s flagship management product for all your database needs, and is in sync with database releases. We provided support for functionality such as RAC and the multi-tenant architecture from day 1 of the software being released, whereas competitor products can take months to catch up. In addition, EM12c provides a full security model, including role based access control, which is used by many Fortune 1000 customers. So with all of that, why would you look at a point solution that only covers one part of managing your Oracle infrastructure?