3 most popular methods for deploying Apple computers

Best method to deploy a Mac

In the past, I've expounded on the timesaving, work-reducing benefits of imaging or cloning desktop computers when deploying new or refreshed equipment. However, as several astute readers recently brought to my attention, the deployment of computers and their relevant software applications is divided beyond the traditional thick vs. thin imaging camps. There's also a growing trend, taking a page from the BYOD playbook, that advocates no imaging at all.

Interestingly enough, no imaging means just that: no cloning of any kind. This is close in concept to thin imaging, which contains the basics necessary to get the system operational, along with a few apps, and is usually restricted to required agents and settings. These exist in stark opposition to the everything-but-the-kitchen-sink mentality prevalent with thick imaging.

Let's begin by taking a look at each method, and then we'll drill down further into what makes them work well (and not so well) for certain environments. We'll also examine the relative benefits that may be gleaned from swapping one deployment style for another, including effects on employee productivity, downtime, and the network.

I. Deployment methods

1. No imaging

By definition, this method relies on a plain OS installer to load the initial OS, or it simply uses the OS that comes pre-installed on newly purchased equipment. No cloning whatsoever is used to deploy the OS, which results in a clean, never-before-booted OS X installation that is (for all intents and purposes) identical to the experience of starting up an off-the-shelf Mac for the first time.

However, the actual software installation and settings configuration are handled after the device performs its first boot, using any number of first- and/or third-party deployment suites or scripts to achieve the desired result: a production-ready computer without any of the cruft carried over from a previously cloned desktop.
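To make this concrete, here's a minimal sketch of what a scripted post-first-boot setup might look like, using only stock OS X command-line tools (scutil, systemsetup, softwareupdate). The computer name and time zone are hypothetical placeholders, not values tied to any particular deployment suite.

```python
"""Minimal post-first-boot provisioning sketch for a no-imaging deployment.

Intended to run as root on a freshly booted, never-imaged Mac. The computer
name and time zone below are placeholder values for illustration only.
"""
import subprocess

def run(cmd):
    """Print a command, run it, and raise if it fails."""
    print("->", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Name the machine (placeholder name).
    run(["scutil", "--set", "ComputerName", "FINANCE-MAC-01"])
    run(["scutil", "--set", "HostName", "FINANCE-MAC-01"])

    # Set the time zone (placeholder zone).
    run(["systemsetup", "-settimezone", "America/New_York"])

    # Pull down and install all available Apple software updates.
    run(["softwareupdate", "--install", "--all"])

if __name__ == "__main__":
    main()
```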

2. Thick imaging

Also known as ghosting or cloning, thick imaging relies on a master computer that the sysadmin configures completely, with all software packages installed and all configurations modified and tested to ensure the machine works 100% as it should. Once this has been confirmed, a bit-for-bit image of the computer's hard drive is captured and used later, during the imaging phase, to deploy Macs.

The captured image contains the full set of files and folders (data, applications, settings, configurations, and system files, including updates) that make up the complete, working computer, and every other machine is cloned from it to work just like the original master. This is commonly referred to as a "golden image," since it's designed to contain every last piece of data necessary to get the computer up and running in a ready-to-use state. No post-deployment updates or software installs are required, because the image has everything it needs.
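For illustration, Apple's built-in asr (Apple Software Restore) utility is one common way a captured golden image gets laid back down onto a target drive. The sketch below assumes an image has already been captured; the image path and target volume name are hypothetical placeholders.

```python
"""Sketch: restore a previously captured "golden" image onto a target volume
using Apple Software Restore (asr). Paths and volume names are placeholders.
"""
import subprocess

GOLDEN_IMAGE = "/Volumes/DeployShare/golden_yosemite.dmg"  # hypothetical path
TARGET_VOLUME = "/Volumes/Macintosh HD"                    # hypothetical target

def restore_golden_image(image, target):
    # --erase wipes the target first; --noprompt skips the confirmation,
    # which is what an unattended deployment workflow generally wants.
    subprocess.run(
        ["asr", "restore",
         "--source", image,
         "--target", target,
         "--erase", "--noprompt"],
        check=True,
    )

if __name__ == "__main__":
    restore_golden_image(GOLDEN_IMAGE, TARGET_VOLUME)
```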

3. Thin imaging

Thin imaging is the leaner, more efficient sibling of thick imaging. The two share similarities in that both aim to deploy the OS in a configured state with the latest updates. From that point on, however, they diverge: thin imaging tries to keep the overall image size down as much as possible by excluding many (if not most) of the apps to be deployed, installing them instead through other avenues, such as Apple Remote Desktop or DeployStudio workflows.

The end result is an image that's mostly production ready, yet may still require application packages to be installed separately before end users have a fully usable machine.
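As a rough sketch of that separate install step, the snippet below walks a hypothetical list of installer packages on a mounted deployment share and hands each one to the stock installer command. A real Apple Remote Desktop or DeployStudio workflow does the same job with far more error handling and reporting.

```python
"""Sketch: install the application packages a thin image deliberately leaves out.
The share path and package names are hypothetical placeholders.
"""
import subprocess

# Packages the thin image does not carry; installed after the OS is laid down.
PACKAGES = [
    "/Volumes/DeployShare/pkgs/Office2011.pkg",
    "/Volumes/DeployShare/pkgs/GoogleChrome.pkg",
    "/Volumes/DeployShare/pkgs/FlashPlayer.pkg",
]

def install_packages(packages):
    for pkg in packages:
        # The stock OS X installer command applies a .pkg to the boot volume.
        subprocess.run(["installer", "-pkg", pkg, "-target", "/"], check=True)

if __name__ == "__main__":
    install_packages(PACKAGES)
```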

II. Deployment impact

With a clearer understanding of what each method entails, let's look at the impact each method has on a few different, yet equally important fronts before, during, and after the course of deployment.

While it should be noted that all three methods are like roads leading to the same destination, how one goes about getting there is entirely different, because each method is unique in how it impacts the daily operation of a production environment.

IT department/Systems administrator

Beginning (and ultimately ending) with IT, there are several factors to take into consideration when choosing a deployment method, including the environment, which gets its own heading in section III below.

The most obvious difference between the methods is the size of the image, which in turn correlates to the amount of data enclosed in the deployment payload. For example, a thick image captured with a complete OS X "Yosemite" installation and the latest updates, along with Microsoft Office 2011 for Mac, Adobe CS6 Design Standard, the full iLife and iWork suites, plus the Google Chrome and Mozilla Firefox browsers and internet-ready plugins (Adobe Flash, Oracle Java, Microsoft Silverlight), would clock in at roughly 15 GB. In comparison, a base install is a little over 5 GB, and a thin image is about 7 GB. Obviously, the thick image will take up more storage space than the other methods. Multiply that by the number of nodes to deploy, and you may find that supporting a larger number of clients also means standing up several servers to offset the load come deployment time.
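A quick back-of-the-envelope calculation makes the scaling concern concrete. Using the rough sizes above (15 GB thick, about 7 GB thin, just over 5 GB base), the sketch below estimates the total data each method pushes across a fleet; the node count is an arbitrary example, not a figure from any real deployment.

```python
"""Back-of-the-envelope: total data moved per deployment method.
Image sizes are the rough figures quoted above; the fleet size is arbitrary.
"""
IMAGE_SIZES_GB = {"thick": 15, "thin": 7, "base OS (no imaging)": 5}
NODES = 200  # hypothetical fleet size

for method, size_gb in IMAGE_SIZES_GB.items():
    total_gb = size_gb * NODES
    print(f"{method:>22}: {size_gb} GB x {NODES} nodes = {total_gb:,} GB total")
```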

Another consideration is the creation and testing process and how it impacts IT, depending on the size of the support staff. Setting up just one Mac and creating a golden image (thick) will take a few hours (maybe more), depending on the number of installations and configurations necessary. Add several more hours to thoroughly vet the image, and you're left with a deployment process that works the same on each computer it's deployed to. On the other hand, if an error is made (and let's face it, even IT makes mistakes), that error will be replicated across every device.

Is the IT department staffed to handle such issues? Are there enough personnel to allow for the creation of a working, tested image while still keeping up with daily demands?

If you choose the thin or no-imaging route, other potential problems arise. Is the infrastructure robust enough to handle installing the OS over the network, and later the applications and settings? Is the sysadmin adequately trained to administer the management suite used to push out software updates and modify configurations after the fact?

These are all valid questions that should be asked (and answered) prior to considering each method's impact on the subsequent sections.

Network/Network administration

Perhaps this next impact is the greatest of them all: how each method will affect the shared resources on the LAN. In particular, the bandwidth requirements of each method vary wildly, and at different times.

No imaging, or using the existing pre-installed OS, has the least initial impact on the network, since OS X is already installed. However, as software is installed and settings changes are pushed from the management suite or server, those changes travel over the network to the client nodes. What's being deployed, and to how many nodes at any given time, is the deciding factor in whether you "break the network." The benefit is that deployments can always be scaled so they don't affect other users too much; the downside is that a staggered deployment may end up extending the project timeline beyond acceptable limits. After all, time is money in business, and the longer it takes to complete a project, the more it costs the company.
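One way to keep a staggered rollout from breaking the network is simply to cap how many nodes receive their payload at once. The sketch below illustrates the idea; the node names, batch size, pause interval, and the deploy_to() placeholder are all hypothetical stand-ins for whatever push mechanism your management suite actually provides.

```python
"""Sketch: throttle a rollout by deploying to nodes in fixed-size waves.
Node names, batch size, and deploy_to() are illustrative stand-ins.
"""
import time

NODES = [f"mac-{n:03d}.example.com" for n in range(1, 41)]  # hypothetical fleet
BATCH_SIZE = 5             # nodes per wave, tuned to what the LAN can absorb
PAUSE_BETWEEN_WAVES = 600  # seconds to let traffic settle (arbitrary)

def deploy_to(node):
    """Placeholder for the real push (ARD task, management-suite call, etc.)."""
    print(f"deploying payload to {node} ...")

def staggered_rollout(nodes, batch_size):
    for start in range(0, len(nodes), batch_size):
        wave = nodes[start:start + batch_size]
        for node in wave:
            deploy_to(node)
        if start + batch_size < len(nodes):
            time.sleep(PAUSE_BETWEEN_WAVES)  # keep bandwidth free for users

if __name__ == "__main__":
    staggered_rollout(NODES, BATCH_SIZE)
```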

Thick imaging, regardless of how few computers are being deployed, is always like taking a wrecking ball to the network bandwidth. The benefit is a complete, ready-to-use desktop once it's done; the downside is that moving such a large file across the network at once leaves very few resources available for end users to get work done.

Protocols such as multicast go a long way toward minimizing these impacts on the network, but the disturbance is still hard to ignore. This is especially true for small- to medium-sized networks, which have fewer bandwidth options than larger corporate entities: a large enough deployment could essentially cripple the network during the deployment window, while smaller, staggered deployments still hammer the network (albeit at a lower rate) on and off until every computer has the desired apps and settings installed.

Employee productivity

We touched on this above with regard to network bandwidth, but it bears continued focus here, since employees' productivity will suffer dramatically the longer they must wait for a new desktop to finish setup or a refreshed computer to complete the configuration process.

Where thick imaging is concerned, the size of the file speaks to the completeness of the overall image. While it does take significantly longer to push a larger image over the network, if it's created properly, the end user will be able to get back to work as soon as the process has completed. No one will have to wait on settings to trickle in or software to get deployed post-imaging.

By comparison, the thin and no-imaging methods are often detrimental to productivity, because once the initial deployment completes, a second wave of deployment begins, this time aimed at software installs and settings changes, which are often scripted or handled by a larger management suite, such as OS X Server's Profile Manager. All of this is required to get the machine into a production-ready state for the end user.

Also, relying on a separate function or service introduces additional variables that can't be worked around except by manual intervention. For example, if the server running Profile Manager goes offline, the desktop will have a clean copy of OS X installed but little else, rendering it more or less useless in the eyes of the end user (and management) until the server is fully back online.
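A small defensive check can soften that failure mode: before kicking off post-imaging configuration, confirm the management server is actually reachable, and retry later rather than leaving a half-configured Mac. In the sketch below, the host name, port, and retry values are hypothetical placeholders.

```python
"""Sketch: wait for the management server (e.g., a Profile Manager host) to be
reachable before starting post-imaging configuration. Host/port are placeholders.
"""
import socket
import time

SERVER = "profilemanager.example.com"  # hypothetical management server
PORT = 443
RETRY_INTERVAL = 300   # seconds between attempts (arbitrary)
MAX_ATTEMPTS = 12

def server_reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_server():
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if server_reachable(SERVER, PORT):
            print("management server is up; starting configuration")
            return True
        print(f"attempt {attempt}: server unreachable, retrying later")
        time.sleep(RETRY_INTERVAL)
    return False

if __name__ == "__main__":
    if not wait_for_server():
        print("server never came back; flag this node for manual follow-up")
```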

Downtime

This is a necessary evil. It plagues each method to a different degree, but the definition is the same across the board: the period of time during which a desktop goes from setup to production-ready.

During this time, the node is effectively offline for end users. While this may represent an insignificant interruption for larger firms, which may be able to accommodate end users by temporarily moving them to another station, small and medium-sized businesses may find the downtime unacceptable because of the hit to their bottom line.

As mentioned previously, time is money: the less downtime there is, the less money the company loses. With that in mind, a week of downtime to take a fleet of Macs from setup to ready-to-use may be agreeable, even if the process has to be split between an initial OS configuration and a separate workflow to deploy software. It still allows the company to keep working, even with niggling interruptions throughout.

A thin image deployment helps walk this fine line between working and not working. In a thick image scenario, by contrast, the network may not be able to bear the weight of all those deployments in one fell swoop, causing the project to linger, blow past the deadline, and extend downtime.

Using this rationale, no imaging is a strong ally of uptime, since most of the heavy lifting, so to speak, is already done. If the correct supporting processes are in place, a desktop can be production-ready within minutes, depending on the business's needs.

III. Deployment environment

The environment is a large enough consideration to warrant its own section, mainly because the resources available in the environment heavily influence which deployment methods will work best (or not work at all).

Some of the more common business environments sysadmins will encounter are large corporate office buildings with ample bandwidth, enterprise-level networking equipment, generous WAN connections, large and powerful servers, dedicated and knowledgeable IT support staff, and management consoles that make it easy to create and deploy software, images, and more.

If you're providing IT services under those ideal conditions, you definitely get to choose whichever method best fits your skills and maintenance schedule. However, if you're working with less than acceptable network equipment, underpowered servers (or none at all), low upload speeds and/or an unreliable power grid, off-the-shelf (or open-source) software solutions, and you're a one-person IT department... well, you've definitely got your work cut out for you!

Remote offices with no staff and little to no outside connectivity are not ideal candidates for thin or no imaging. Locations with unstable power or consumer-level equipment will typically struggle to keep up with the management or deployment server, causing nodes to miss software installs or configuration changes, which only exacerbates the problem.

Your working environment is always going to be an important factor in deciding which method will work best. It may not be the preferred method or the one that adheres to best practices, but it should be the one that gets the job done promptly and correctly.

A new computer is easy enough to set up and deploy using any of the methods listed above. But what if it isn't a new computer? If it's an existing machine with a recently replaced HDD/SSD, there are really only two viable options: thin or thick imaging, since the "no imaging" method won't work without a pre-installed OS.
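One simple way to automate that decision is to check whether the target volume already carries a bootable OS X installation; a freshly replaced, blank HDD/SSD won't. The sketch below keys off SystemVersion.plist, which a working OS X install includes; the volume path is a hypothetical placeholder.

```python
"""Sketch: decide between the "no imaging" route and an imaging method by
checking whether the target volume already carries an OS X installation.
The volume path is a placeholder.
"""
import os

TARGET_VOLUME = "/Volumes/Macintosh HD"  # hypothetical target

def has_installed_os(volume):
    # A working OS X install ships this file; a blank replacement drive won't.
    marker = os.path.join(volume, "System/Library/CoreServices/SystemVersion.plist")
    return os.path.exists(marker)

if __name__ == "__main__":
    if has_installed_os(TARGET_VOLUME):
        print("OS present: the no-imaging route is an option")
    else:
        print("blank drive: fall back to thin or thick imaging")
```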

Ideally, deployment would be a one-size-fits-all solution. Yet, try as you might, the differences between one method and another are only part of what determines whether a given approach will truly work for a deployment project or become an obstacle to it.

Do you dislike a particular method listed above? Which is your preferred method of deployment and why? Sound off in the comments below.