Compatibility Choices: Platform and Device Guidance

DeveloperTown • Apr 24, 2015

Your world-beating idea needs a web app. Is it intended for just Windows users? Mac users? Mobile device users?

Or your world-beating idea needs a mobile app. Which platform (iOS, Android, Windows, BlackBerry)? Which devices? Which OS versions?

(Hint: the answer to both questions probably isn’t “All of them.”)

Given a fixed amount of time and budget, your answers to those questions, along with the techniques for shaping the problem space described below, will go a long way toward determining how resources are allocated to:

  • Design: Define an app with clean design and UX across browsers/platforms
  • Build: The code required to deliver on the promise of the design
  • Test: Verification of the design and code in target environments

Why does this matter? To frame it in terms of risk and opportunity cost:

  • You might opt to only support the latest and greatest browsers/devices (IE, Chrome, Firefox, Safari) to deliver more features more quickly. The downside risk is that if you discover later that a material portion of your target market uses older browsers/devices, the time, effort, and cost to go back and pick up compatibility can be high.
  • To avoid that risk, you might decide from the start to cast the widest net possible in terms of supported devices/browsers (including older, or very old, IE) to keep all of your potential options open. The downside risk is that you deliver fewer features, ship more slowly, and burn money faster, all before you know whether you're actually getting any value out of the effort. If your target market actually occupies a much smaller slice of all possible browsers/devices, the opportunity cost of "playing it safe" can be very high indeed.

The space between one extreme and the other is pretty wide. Figuring out where you’re best served along this continuum is important.

Here are some general guidelines that can help frame the conversation. They look at the problem space along four vectors that change continuously throughout the life of the product.

1. Target Market

  • Know your target users' preferred environment(s)
  • Focus the most energy on the OSs/platforms/versions your research shows are most valuable.

2. "Consumerized" Model

  • Focus on the latest shipping version and/or critical combination for your users
  • Eliminate combinatorial complexity

3. Certified/Supported Designations

  • Certified: assure that it works in specific environments
  • Supported: assert general compatibility, but allow that there may be issues (which you may fix at your discretion)

4. Usage Data

  • Before launch, use heuristics or comparable apps to drive decisions
  • After launch, make data-driven decisions

Each is described in more detail in the rest of this post.


1. TARGET MARKET

Sometimes, your target market and your app itself define compatibility. For example, if you want a web app for Fortune 100 businesses that works with Microsoft SharePoint and leverages ActiveX controls, then you're automatically limited to Internet Explorer versions 8 through 10. Pretty straightforward.

In the absence of a highly specialized market or user base, though, there’s a standard array of browsers to consider. DeveloperTown develops and tests on Mac Chrome, but there are other browsers and even platforms to think about.

The tables below show a general breakdown of internet traffic by operating system and list the most common browsers you might consider when deciding how to allocate time for design, coding, and testing.

[Tables: internet traffic by operating system and browser, based on North American browser statistics for the quarter ending 12/2014]

General popularity and statistics aside, the most important question to ask when deciding what to design, build, and test for is: who are you targeting? Home Windows users have a different profile than cutting-edge technophiles; cutting-edge technophiles have a different profile than corporate Windows users locked into a Windows XP environment (and IE8).

Mobile browsers are another matter altogether. Making websites mobile-friendly is mostly a solved problem, but you have to consider the different experiences, and constraints, between mobile and desktop. For example, you have to handle both portrait and landscape orientations on mobile, and remember there are no hover effects. You have to consider the much smaller phone form factor and how it affects touch-screen controls. Phones have less horsepower than tablets, too, so pages need to stay lightweight to remain performant (especially as you go back a generation or two in mobile devices). And even light testing across desktop, tablet, and phone requires budgeting a lot of time and effort.
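To make those constraints concrete, here's a minimal sketch in TypeScript of detecting them at runtime with window.matchMedia. The orientation query is widely supported; treat the "(hover: hover)" query as an assumption, since it's a newer media feature that older mobile browsers don't implement:

    // A minimal sketch: feature-detect mobile constraints from script.
    // The "(hover: hover)" media feature is newer and may be missing
    // from older mobile browsers, so treat it as an assumption here.

    function describeEnvironment(): { canHover: boolean; portrait: boolean } {
      return {
        canHover: window.matchMedia("(hover: hover)").matches,
        portrait: window.matchMedia("(orientation: portrait)").matches,
      };
    }

    // React to rotation instead of assuming a single orientation.
    window.matchMedia("(orientation: portrait)").addListener(function (mq) {
      console.log(mq.matches ? "portrait layout" : "landscape layout");
    });

    if (!describeEnvironment().canHover) {
      // No hover effects on touch screens: keep controls visible by default.
      document.body.classList.add("touch-friendly");
    }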

Anticipating who will use your product, and targeting high-value environments, can be the difference between an efficient use of design, code, and test resources and over- or under-investing in one or more of them.


2. A "CONSUMERIZED" MODEL

To understand what "consumerized model" means in the context of compatibility, it's easiest to start by defining its opposite: the Enterprise Model.

The Enterprise Model
Enterprise software is software developed with a specific browser and OS ecosystem in mind. For example, if a company's IT department is developing an internal timekeeping application for the company's employees, it needs to know the specific browsers and OSs the company supports (or is planning to move to) and design and build for those. If the company has everyone running Windows 8 and IE11, that's all the system needs to work with.

A broader example might be a software product company that builds systems for large banks. Given that the target banking market runs Windows versions ranging from Windows XP to Windows 8, and Internet Explorer versions from IE8 to IE11, the software company might build a grid of OS and IE versions, designing and building with all applicable combinations in mind. This leaves out disallowed combinations (e.g. IE10 on Windows XP, or IE8 on Windows 8). Since users in many enterprise environments are barred from installing other browsers like Chrome and Firefox, the problem space becomes knowable and predictable.
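To make the grid concrete, here's a minimal sketch in TypeScript that enumerates it. The version lists mirror the banking example above, and the per-OS IE ranges reflect what Microsoft actually shipped (XP tops out at IE8, Vista at IE9, and Windows 8 is locked to IE10):

    // A minimal sketch: enumerate an enterprise OS/IE test grid,
    // excluding combinations the platforms themselves disallow.

    const windowsVersions = ["XP", "Vista", "7", "8"];

    // Lowest and highest IE version in scope on each Windows version.
    const ieRange: { [os: string]: [number, number] } = {
      XP: [8, 8],     // XP tops out at IE8
      Vista: [8, 9],  // Vista tops out at IE9
      "7": [8, 11],   // Windows 7 shipped IE8, upgradable through IE11
      "8": [10, 10],  // Windows 8 is locked to IE10
    };

    const grid: string[] = [];
    for (const os of windowsVersions) {
      const [min, max] = ieRange[os];
      for (let ie = min; ie <= max; ie++) {
        grid.push("Windows " + os + " + IE" + ie);
      }
    }

    // Eight combinations to design, build, and test for.
    console.log(grid.length + " combinations:");
    grid.forEach((combo) => console.log(" - " + combo));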

A "Consumerized" Model
If an enterprise model is characterized by a tightly controlled browser and operating system environment, a consumerized model has to deal with the opposite. When you're developing for consumers, especially consumers who don't fall into a narrow demographic (which can translate into a narrow range of browsers and OSs), the number of browser and OS combinations explodes. Consumers can, and do, use everything.

Firefox and Chrome, for example, ship a new version every six weeks. Imagine testing the last 20 versions of each browser on all supported Windows operating systems (XP, Vista, 7, 8, 8.1, and soon 10) and all supported Mac operating systems (10.5.x through 10.10.x, with a new version shipping annually) every time you wanted to add or update a feature: that's 40 browser versions across 12 operating systems, or 480 combinations. And for simplicity, we're not even talking about testing against minor release versions or security updates (or Linux distros!).

One good way to set a boundary on the design/build/test problem space is to establish a "consumerized" model. The model calls for eliminating combinatorial complexity in favor of testing latest-version combinations only. Given that browsers are free and (outside of OS-level limitations) can be installed by consumers at will, it's not unreasonable to consider only the latest shipping browser versions, since users can upgrade to resolve issues they hit on older versions.

To re-visit the Chrome and Firefox example, this means:

  • New features/fixes are designed for, built for, and tested on the latest shipping version of the browsers as you go along.
  • Testing for prior browser versions is not required.
  • New version releases of browsers don't require regression testing for the entire app.

This approach also extends to operating systems. For example, if a site functions properly and looks good in IE11 on Windows 8, we're not also going to look at IE11 on Windows 7. In like manner, we're not going to check the latest version of Chrome on Windows 8, Windows 7, Windows Vista, and Windows XP.
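Expressed as data, the consumerized test matrix stays small and stable. A minimal sketch follows; the browser/OS pairs are illustrative (roughly current as of early 2015), not an official support list:

    // A minimal sketch: the consumerized model collapses the matrix to
    // the latest shipping version of each browser on one current OS.
    // These pairs are illustrative, not an official support list.

    interface Target { browser: string; version: string; os: string; }

    const certifiedTargets: Target[] = [
      { browser: "Chrome",  version: "latest", os: "Windows 8.1" },
      { browser: "Firefox", version: "latest", os: "Windows 8.1" },
      { browser: "IE",      version: "11",     os: "Windows 8.1" },
      { browser: "Safari",  version: "latest", os: "OS X 10.10" },
    ];

    // Four combinations per release instead of hundreds. When Chrome or
    // Firefox auto-updates, only "latest" moves; no regression pass over
    // older versions is ever scheduled.
    console.log(certifiedTargets.length + " combinations to verify per release");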

This may not be possible in all situations, depending on the target market, but it should be the default mindset when scoping a project. While there's no guarantee that an older browser/OS combination will work as well as the latest shipping version of both, the number of times that's not the case is very small: too small to justify the time and money of a continuous regression testing process.


3. "CERTIFIED" VS. "SUPPORTED" ENVIRONMENTS

Another way to narrow the problem space is to designate only high-value, must-have environments as Certified and lower- or uncertain-value environments as Supported. Here are working definitions of each term (though you can and should more specifically define them in your context):

  • Certified: Environments we've designed for, built for, and tested on. We have high confidence the app works and will keep working there, and we'll give serious consideration to fixing issues reported against a Certified environment.
  • Supported: Environments we don't value as much, or aren't sure we need to certify. "Supported" means we'll log issues reported from the field against non-certified combinations or devices; issues go into the backlog of fixes and enhancement requests and are considered for inclusion in a future release.

Note that designating some environments that weren’t explicitly designed/built/tested for as Supported accomplishes two things:

  • It keeps users from being left out in the cold if they happen to be running a non-certified environment. While they aren't guaranteed issue resolution, they still have a chance to get help.
  • Most importantly, it's an avenue for feedback from actual usage in the live environment that might otherwise be lost. If there's a groundswell of issues reported from a combination we hadn't focused on during design/build/test, that feedback is incorporated when considering changes to the list of Certified environments.

It's important to understand that the semantics of Certified and Supported aren't a way to circumvent reality: you can't design/build/test for Mac Chrome and call IE8 "Supported." A better example of the "Supported" designation is Certifying Windows Firefox (the current shipping version, a la the consumerized model) and Supporting older versions of Firefox: you expect the app will be fine on many older versions of Firefox, even though you haven't explicitly revisited them.
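One way to make the distinction operational is to encode it as data the team consults during bug triage. A minimal sketch (the environment names and the triage rules are illustrative assumptions, not a prescribed policy):

    // A minimal sketch: encode Certified vs. Supported as triage data.
    // Environment names and triage rules are illustrative assumptions.

    type Designation = "certified" | "supported" | "unsupported";

    const designations: { [env: string]: Designation } = {
      "Firefox latest / Windows": "certified",
      "Chrome latest / Windows":  "certified",
      "Firefox 31-35 / Windows":  "supported",   // older versions: expected OK
      "IE8 / Windows XP":         "unsupported", // never designed or built for
    };

    function triage(environment: string): string {
      switch (designations[environment] || "unsupported") {
        case "certified":
          return "Reproduce and give serious consideration to a fix.";
        case "supported":
          return "Log it; consider it for a future release.";
        default:
          return "Out of scope; track report volume in case demand grows.";
      }
    }

    console.log(triage("Firefox 31-35 / Windows"));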


4. USAGE DATA

Actual usage data provides powerful guidance on what you design, build, and test for. This is often a challenge at DeveloperTown, since most of our projects create brand-new software and therefore have no usage data to reference at the start of a project.

Depending on your platform and needs, there are several packages that can help deliver insight into the browser, operating system, and their versions your users are on when they access your app:

  • Google Analytics (web and mobile)
  • Mixpanel (web and mobile)
  • Heap Analytics (web and mobile)
  • KISSmetrics (web)
  • Flurry Analytics (mobile)
  • Adobe Analytics (web and mobile)
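If you're not ready to adopt one of these packages, even a tiny home-grown capture can start answering the browser/OS question. A minimal sketch (the /api/usage endpoint is hypothetical, and a real analytics tool will parse user agents far more robustly):

    // A minimal sketch: report each visitor's environment to a
    // hypothetical /api/usage endpoint for later analysis.

    interface UsageRecord {
      userAgent: string;    // raw string; parse into browser/OS server-side
      screenWidth: number;
      screenHeight: number;
      timestamp: string;
    }

    function reportEnvironment(): void {
      const record: UsageRecord = {
        userAgent: navigator.userAgent,
        screenWidth: window.screen.width,
        screenHeight: window.screen.height,
        timestamp: new Date().toISOString(),
      };

      fetch("/api/usage", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(record),
      });
    }

    reportEnvironment();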

But what if you haven't launched yet?

Sometimes there are proxies available that you can use to model your probable user makeup. If you're replacing an existing solution, then you've got a fantastic source of data. Sometimes you can make educated guesses based on a competitor, or someone in a similar space to yours: what are they supporting, and how does that map to the expected overlap in your target market?

A final thought on usage data: you're never done examining it. Your users' environments will change over time, so usage data should be monitored on an ongoing basis to confirm that design/build/test effort is still delivering value, and that your users' environments haven't drifted into territory you never designed or coded for.
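The drift check itself can be simple: recompute each environment's share of traffic and flag anything significant that isn't Certified. A minimal sketch (the data shape, numbers, and 5% threshold are illustrative assumptions):

    // A minimal sketch: flag uncertified environments whose share of
    // traffic has grown past a threshold. Numbers are illustrative.

    interface EnvCount { environment: string; visits: number; }

    function findDrift(
      counts: EnvCount[],
      certified: Set<string>,
      threshold: number = 0.05, // flag anything over 5% of traffic
    ): string[] {
      const total = counts.reduce((sum, c) => sum + c.visits, 0);
      return counts
        .filter((c) => !certified.has(c.environment))
        .filter((c) => c.visits / total >= threshold)
        .map((c) => c.environment + ": " + (100 * c.visits / total).toFixed(1) + "%");
    }

    const flagged = findDrift(
      [
        { environment: "Chrome latest / Windows", visits: 6200 },
        { environment: "Safari / iOS 7",          visits: 900 },
        { environment: "IE9 / Windows Vista",     visits: 130 },
      ],
      new Set(["Chrome latest / Windows"]),
    );

    // "Safari / iOS 7" crosses 5% and isn't certified: revisit the list.
    flagged.forEach((f) => console.log("Review: " + f));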


To sum up: broad compatibility isn't fast or free. By knowing your target market, using a consumerized approach where appropriate, managing resources with Certified and Supported designations, and looking to usage data to drive the evolution of your compatibility work, you'll be well prepared to identify the highest-value compatibility targets and be smart about your design, build, and test decisions.