Selling Your Tech Company — Surviving Due Diligence

[Warning - long post.  I thought about breaking it up into several posts, but decided to just go ahead and post it.]

I ran across an interesting article:  Google’s Rules of Acquisition: How to Be an Android, Not an Aardvark.  The article details some of what Google specifically looks for in an acquisition these days.  Interestingly, companies that acquire other companies often end up doing worse than their competitors.  Although they’ve had their exceptions, one company that has done well with acquisitions is Google.

Read the article for details – but one point stands out.  Google looks for companies that have a vision which is closely aligned with their own.  When both groups are heading in the same direction, it makes everything easier.  The article has a list of 13 rules at the end – again, I encourage you to read it for yourself.

Today I want to talk about a couple of issues not covered by that article, specifically related to mergers and acquisitions of technical companies.

When a company is sold, or sometimes even when a company is being considered for a large contract, the purchasing/contracting company will conduct “due diligence” designed to discover and quantify any issues or risks.  In virtually all cases, this will include an audit of the financial statements and condition of the company.  Of course Sarbanes-Oxley  inserts more “fun” into financial disclosures and auditing.

In the case of a tech company, there is an additional layer of due diligence related to the products and operations of the company.

Any intellectual property, especially patents, will be examined carefully as to completeness, viability, practicality, enforceability, etc.  The value assigned to the intellectual property will vary wildly depending on these examinations.

Products and Applications

A prospective suitor will want to quantify the value of your products and applications, as well as your ability to support and create additional products in the future.  In addition to the aspects of the products themselves and their sales history and potential, they will be looking for control, consistency, and maturity.

Any use of open-source software will be examined.  Typically, use of open-source development tools and operating systems is fine, but the licenses for each tool and OS will be examined to make sure there are no limitations, such as the license disallowing use for commercial (or in some cases military) products.  Use of embedded open-source code, such as libraries or functions incorporated into the products, will be examined much more closely.  Many open-source licenses, such as the GPL, require releasing your modified source code along with your product – a provision often seen as unacceptable.  Licenses for embedded commercial software will be examined as well, but rarely present the same issues.  In preparing for due diligence, it is wise to have examined all of the potential issues in advance, and to have addressed any that might present a problem.  Showing that you understand the issues will help to reassure the purchaser/client.
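
As a starting point, here is a minimal sketch of the kind of dependency/license inventory you might build before due diligence.  Python is used purely for illustration; it simply reads the license metadata of whatever packages are installed in the current environment, and a real review would also cover vendored code, OS packages, and front-end dependencies.

# Minimal sketch: inventory the licenses of installed Python packages
# as a starting point for an open-source review.  Assumes Python 3.8+;
# vendored code and non-Python dependencies still need manual review.
from importlib import metadata

for dist in sorted(metadata.distributions(),
                   key=lambda d: (d.metadata["Name"] or "").lower()):
    name = dist.metadata["Name"]
    version = dist.version
    license_field = dist.metadata.get("License") or "UNKNOWN"
    classifiers = [c for c in (dist.metadata.get_all("Classifier") or [])
                   if c.startswith("License ::")]
    print(f"{name}=={version}: {license_field} {classifiers}")

Even a simple report like this, kept current, shows a purchaser that you know exactly what is embedded in your products and under what terms.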

Development Processes

While startups are not expected to have audited CMMI Level 5 policies and procedures in place, the more mature and solid the development processes are, the more value that the acquiring company will assign to the products and employees of the startup.

The primary goals of the development processes are to ensure consistent quality, repeatability and productivity.

There are a number of industry-standard processes.  Most can be broken down into two major groups: waterfall (in various variants and configurations), and Agile (again, with lots of variants).  In addition, there are lots of tools, such as UML, database design and modeling tools, load generation tools, source control tools, etc.  Agile methodologies are flexible and lightweight, especially suitable for in-house, high-trust environments with lots of releases.  Waterfall variants are often suitable for lower-trust environments, such as vendor/client development.  I have worked with both successfully, but there are a number of items you must handle carefully in either case.

Documentation

One of the key items that will be looked at is documentation, including comments in the code itself.  The amount, quality and specifics of the documentation will vary widely based on the development processes.  Smaller groups can use significantly less documentation than larger groups that need to coordinate multiple efforts into a single project or product.

As I wrote in my free book Effective Development, all documentation should satisfy one or more of five purposes:

1) Make the project run more smoothly

2) Contribute to the quality of the finished product

3) Facilitate long-term maintenance

4) Improve communications

5) Increase customer satisfaction

One should be careful about requiring too little or too much documentation.  If a reasonably skilled team can recreate the software from nothing but the documentation, and if there is enough detail for a reasonable team to maintain the software over the long term, you probably have about the right amount of documentation.

Too little documentation increases corporate risk, especially in the case of acquisitions.  Key people may find themselves in improved financial condition and want to leave before they have completed a solid knowledge transfer to the acquiring company.  In general, too little documentation also raises risk for the startup itself – if a key employee wins the lottery (or gets hit by a bus), the gaps can lead to major problems for the company.

On the other hand, startups need to run “lean and mean” as much as possible.  Excessive documentation is a level of inefficient overhead that cannot be afforded by a startup.  It is a fine line to walk, but walk it we must.  The line will also be a moving target as the startup grows – larger teams require more documentation than smaller teams.

Quality (Quality Assurance, Quality Control)

QA/QC procedures will be examined.  The acquiring company will want to make sure of elements such as:

1) There’s no lurking undiscovered horde of bugs about to attack

2) QA/QC is given a high priority

3) Developers build in unit testing, preferably fully automated

4) Integration/system testing is conducted, preferably using a mixture of automated and manual methods as well as white-box and black-box testing

5) Good metrics such as find/fix rate are readily available (a sketch of this calculation follows this list)

6) Solid, prioritized, and regularly updated bug lists are maintained
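
To illustrate item 5, here is a minimal sketch of computing a weekly find/fix rate.  The record format and dates are hypothetical stand-ins; in practice you would pull the events from your bug tracker’s export or API.

# Minimal sketch of a find/fix rate metric from hypothetical bug-tracker
# records: count bugs opened vs. fixed per ISO week.
from collections import defaultdict
from datetime import date

# (date, event) pairs exported from a hypothetical bug tracker
events = [
    (date(2012, 3, 5), "opened"), (date(2012, 3, 6), "opened"),
    (date(2012, 3, 7), "fixed"),  (date(2012, 3, 13), "opened"),
    (date(2012, 3, 14), "fixed"), (date(2012, 3, 15), "fixed"),
]

weekly = defaultdict(lambda: {"opened": 0, "fixed": 0})
for day, event in events:
    week = day.isocalendar()[1]          # ISO week number
    weekly[week][event] += 1

for week, counts in sorted(weekly.items()):
    ratio = counts["fixed"] / max(counts["opened"], 1)
    print(f"week {week}: opened {counts['opened']}, "
          f"fixed {counts['fixed']}, fix/find ratio {ratio:.2f}")

The exact metric matters less than being able to produce it on demand and show its trend over time.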

IT infrastructure

While the acquiring company is not likely to expect fully audited compliance with ITIL 2011 (or equivalent), they will judge the professionalism of the company and assign or subtract value based on what they find.

If you can’t measure it, it doesn’t exist

One thing the acquiring company will appreciate is solid metrics.  The more you can quantify the important parts of your operations and report against those prioritized metrics over time, the more you will show that you are handling your operations in a professional manner.

It is an interesting dichotomy.  While they are buying a “startup”, they want to see everything squared away as if you were a mature organization.  They do not want to buy problems.  They want to buy solutions, ideas, and people who will fit well into the goals and structure of the acquiring company.

Backups and disaster recovery

Backups and disaster recovery often mark the difference between a mature organization and a brand new one.  It is common for version one of a product to be created by a lone developer working on a local computer with no real backups.  As the company matures, having good backups of everything becomes more important – including backups of production, source code, configurations, etc.  It is also important to have “disaster recovery” plans in place.  They don’t have to be extensive, but not having them in place and tested is a red flag.  For many small groups it can be as simple as sending a copy of their backups weekly or monthly to an offsite location, preferably in a different state.
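
As a minimal sketch of that “simple offsite copy” approach (the directories, destination host, and schedule here are hypothetical placeholders):

# Minimal weekly offsite backup sketch: tar up the important directories
# and rsync the archive to an offsite host.  Paths, host, and schedule
# are hypothetical placeholders.
import datetime
import subprocess
import tarfile

SOURCES = ["/srv/app/config", "/srv/app/data", "/home/build/source"]
stamp = datetime.date.today().isoformat()
archive = f"/var/backups/weekly-{stamp}.tar.gz"

with tarfile.open(archive, "w:gz") as tar:
    for path in SOURCES:
        tar.add(path)                 # recursive by default

# Ship the archive offsite (different building, ideally a different state).
subprocess.run(
    ["rsync", "-az", archive, "backup@offsite.example.com:/backups/"],
    check=True,
)
print("backup complete:", archive)

Whatever mechanism you use, the key is that it runs on a schedule, the results are verified, and a restore has actually been tested.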

Desktop and internal support and provisioning

In the beginning, a startup has everyone doing their own desktop computer purchasing and support.  This avoids overhead early on.  As the organization matures, it is typically helpful to establish a person or small group to manage all of the desktop/phone/copier resources.  This allows each person to focus on the tasks that they do best, and which add the greatest amount of value to the startup.  You do not want the CEO doing $10/hour tasks – you want them to be focusing on the $10K/hour tasks.  The acquiring company will examine how this person or small group operates.

Web hosting

In the early days, companies often hosted their own servers.  As the standards and expectations for websites became more demanding, the costs of hosting your own servers increased, often beyond what was reasonable.  By the time you include power, cooling, server costs, networking costs, backup provisioning, backup power including long-term supply (typically a diesel generator), multi-homed fast internet connections, etc., the total often exceeds what you would pay a company that specializes in hosting.

Many startups begin with inexpensive hosting, such as GoDaddy or DreamHost shared hosting.  This is a reasonable use of funds early in the lifespan of the startup.  Such hosting has serious limitations over the long term in terms of robustness and ability to handle significant loads.

The next tier is to go with some of the dedicated server offerings from these same companies.  While this can help, there are often struggles with network-related issues, because the dedicated servers typically sit on the same networks as the shared hosting.

There are a huge variety of inexpensive hosting companies, some significantly better than others.  For example, as of this writing, I have had some great experiences with GatorHost.com.  However, this can change rapidly.

There are also a large number of premium hosting companies – but unfortunately, nothing comes for free.  The premium companies give great service and have great facilities, etc., but they charge premium rates.  This can be worth it, but needs to be carefully investigated and managed.  It is easy to throw money at a problem – but sometimes we think the problem is with the hosting company and it turns out to really be with our applications or architecture.  Unless there are solid metrics showing that the hosting company is at fault, we must be extremely careful about how we proceed.  There are lots of reasons that two servers might stop communicating beyond just “network issues.”

Two premium companies I recommend at this time are RackSpace.com and CalPop.com.  Both are great, but both cost more than the lower cost hosting companies.  You get what you pay for – premium companies have multiple backup generators/failover, multiple core internet connections, etc.  Also the type of service and support you get from these companies is truly amazing.  I like CalPop especially for colocation of my own equipment, and RackSpace especially for a managed private “cloud”.

“Cloud” hosting is a poor term, ill defined, and it varies wildly depending on whom you talk to.  For example, I have a single hardware box sitting to my left that is running 7 virtual servers configured in a fault-tolerant configuration with automatic failover and recovery.  Some might call that a “cloud”, but I never would.  One box does not make a cloud, but it can be great for development and testing.  However, as long as you get people to clearly identify what they mean (which can be difficult), using cloud services can be an interesting alternative.  The appeal of services that feature instant scalability, robust multiple instances with failover, etc., can be great.

There are a couple of issues, even with large, mature cloud hosting such as that offered by Amazon.  First, the costs are typically quite high compared to a similar level of performance from discrete hardware.  This leads people to often use Amazon only for their file storage services, not their complete virtual host solutions.  I’ve used their solutions and they are nice and easy to set up and use – just pricey.  Second, even being a “cloud” does not guarantee perfect uptime and communications.  Even Amazon experienced a significant outage on April 21, 2011 – with some major sites affected for as long as 11 hours.  When all is said and done, even huge “clouds” consist of servers, networks, databases, etc., and can have issues and problems.

One thing that can help is to use a CDN (content delivery network), which is a set of servers deployed in front of your site that essentially caches your site.  Akamai is a prime example of a CDN provider.  They have a huge number of servers (over 15,000, I believe), placed in locations all around the world, which provide a local front end to your site.  This also helps your site withstand even intense DDOS attacks: even if attackers succeed in bringing down a local Akamai server, your site remains available through the rest of the world.
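
For the CDN to do its job, your origin has to tell it what is safe to cache.  Here is a minimal sketch of an origin handing out Cache-Control headers; the server, port, and cache lifetimes are hypothetical, and any real CDN has its own configuration on top of this.

# Minimal sketch of a hypothetical origin server emitting the
# Cache-Control headers a CDN edge typically honors when caching.
from http.server import BaseHTTPRequestHandler, HTTPServer

class OriginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello from the origin</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Let shared caches (the CDN) hold this page for an hour,
        # while browsers revalidate after five minutes.
        self.send_header("Cache-Control", "public, s-maxage=3600, max-age=300")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), OriginHandler).serve_forever()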

Web architecture

The architecture and configuration of your web presence directly affect two principal issues: performance and robustness.

The performance of a given application can vary over a huge range, depending on how it is set up and configured.  For example, I came onto one project where the database was responding very slowly.  The company was about to buy a much more expensive database to compensate.  By analyzing the queries and adding a couple of simple indexes, we were able to speed up the database response by three orders of magnitude (1000X), and the need to spend a bunch of money was averted.
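
As a minimal sketch of that kind of fix (SQLite is used purely for illustration; the project in question used a different database), compare the same query before and after adding an index:

# Minimal sketch: how a single index changes query time by orders of
# magnitude.  The schema and row counts are illustrative only.
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(random.randint(1, 50_000), random.random() * 100) for _ in range(500_000)],
)
conn.commit()

def timed_lookup(label):
    start = time.perf_counter()
    for customer in range(1, 501):
        conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = ?",
                     (customer,)).fetchone()
    print(f"{label}: {time.perf_counter() - start:.3f}s for 500 lookups")

timed_lookup("without index")          # full table scan per query
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
timed_lookup("with index")             # index seek per query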

There are times when it is appropriate to bring in additional computing resources – but one should always be sure that the applications have been reasonably optimized first.  For many developers, bug fixing and optimizing is not as “fun” as adding new features, but over the long term both are crucial.  Sometimes optimizing involves restructuring and refactoring in order to handle production loads and incorporate lessons learned from early releases.  Profiling your application, especially during load/stress testing, can be an invaluable way to understand what you need to focus on in order to increase the speed of your application.
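
For example, here is a minimal profiling sketch using Python’s built-in cProfile; the handler and workload are hypothetical stand-ins for your own code paths.

# Minimal sketch of profiling a hot code path under simulated load.
import cProfile
import pstats

def handle_request(n):
    # Stand-in for real request-handling work.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(1_000):            # simulate load during a stress test
    handle_request(5_000)
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)   # top 10 time consumers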

Robustness can also depend on a number of factors, and can be measured in a variety of ways depending on the application.  In its most basic form, it means the server has not crashed or frozen while executing the application.  Other considerations include degraded performance or functionality, etc.

High availability computing is tied directly to robustness.  There are a number of different models and ways to describe high availability.  The most common is in terms of “nines” – the uptime of your systems, expressed as a percentage with a growing number of nines.  Specifically:

A typical year has 525,600 minutes (a leap year adds a day).

One nine – 90% – the system is available 90% of the time.  This means it is down 876 hours per year.
Two nines – 99% – down 87.6 hours per year
Three nines – 99.9% – down 8.76 hours per year
Four nines – 99.99% – down 52.56 minutes per year
Five nines – 99.999% – down 5.26 minutes per year
Six nines – 99.9999% – down 0.53 minutes per year (about 32 seconds)
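
A quick sketch of the arithmetic behind this table:

# Downtime per year for one through six nines of availability.
MINUTES_PER_YEAR = 525_600

for nines in range(1, 7):
    availability = 1 - 10 ** -nines          # 0.9, 0.99, 0.999, ...
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nine(s): {availability:.6%} up, "
          f"{downtime_min:,.2f} minutes down per year")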

There is a lot of game playing when it comes to these numbers.  Some people exclude “planned” downtime, such as upgrades.  Some people exclude all service reductions and only count full outages.  I prefer to pre-quantify when a service degradation actually qualifies as an outage, and to build my systems such that they can have an upgrade applied without any downtime.  In other words, in my world, I prefer to not allow ANY “planned” outages if possible.

Allowable outages are a business decision.  It can take a good deal of money to increase your uptime by a full “nine.”  This is especially true when moving above four nines.  (I list six nines, but almost nobody has that as a goal.)  However, creative approaches can often mitigate much of the expense.  For example, if your system has acceptable performance running on a handful of physical servers, then by virtualizing your servers and spreading them out among your handful of physical boxes, with some redundancy and failover, you can often gain one to two nines from a lower starting point with little or no hardware expense.

Having a solid testing environment and procedures can be critical in increasing both robustness and speed.  Again, however, virtualization can be key to containing costs while still allowing good evaluation.  Virtualization does not need to be expensive, even for startups.  Personally, I really like Xen virtualization from Citrix.  It comes in different levels, including a free version.  The free version is wonderfully powerful and is adequate for many, many applications.  If you set up your systems properly, you can even apply load testing against your server setup and catch many conditions that only show up under load.  Sometimes it is only practical to load test parts of your systems at a time, but that is better than nothing.  Sure, a replica of your production systems would be ideal – but we live in a real world with constrained funding, and need to act efficiently and prudently.
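
Here is a minimal load-generation sketch; the URL, request count, and concurrency are hypothetical placeholders, and it should only ever be pointed at a test environment.

# Minimal load test: hammer a test endpoint with concurrent requests
# and report error count and latency percentiles.
import concurrent.futures
import time
import urllib.request

TARGET = "http://test.example.com/health"   # hypothetical test endpoint
REQUESTS = 200
WORKERS = 20

def hit(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
        return True, time.perf_counter() - start
    except Exception:
        return False, time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

ok_times = sorted(t for ok, t in results if ok)
errors = sum(1 for ok, _ in results if not ok)
print(f"{REQUESTS} requests, {errors} errors")
if ok_times:
    print(f"median {ok_times[len(ok_times) // 2] * 1000:.1f} ms, "
          f"p95 {ok_times[int(len(ok_times) * 0.95)] * 1000:.1f} ms")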

Security

How well are your internal networks protected from outside attack?  For a web architecture, how well do your systems prevent damage from common attacks, such as:

  • DDOS (distributed denial of service)
  • SQL injection (a parameterized-query sketch follows this list)
  • Cross-site scripting
  • Cross-site request forgery
  • Back doors
  • Spoofing
  • Man-in-the-middle
  • Replay
  • TCP/IP hijacking
  • DNS poisoning
  • Weak keys
  • Brute force
  • Dictionary attacks
  • Buffer overflow
  • Sniffing
  • Ping of death
  • Port scanning
  • Fragment attacks
  • And many others
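
To illustrate the SQL injection item above, here is a minimal sketch (SQLite purely for illustration) of the difference between building SQL by string concatenation and using a parameterized query:

# Minimal sketch: injectable vs. parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "bob' OR '1'='1"     # hostile input

# Vulnerable: the input is pasted into the SQL text, so the OR clause
# matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver passes the value separately from the SQL, so it is
# treated as a literal string and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print("vulnerable query returned:", vulnerable)   # both rows leak
print("parameterized query returned:", safe)      # empty list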

In addition, if your company processes credit card orders itself in any fashion, as opposed to using an external third party for all transactions, you will also need to show PCI compliance via an external audit.  These audits can be very intensive, but they often surface items that are worth correcting anyway.

Conclusion

With preparation and work, it is possible for a startup to get through a deep technical audit and due diligence.  While it would be nice to have everything in place immediately, this rarely happens.  Because of funding considerations, the time the work takes, and other factors, it is best to set up a plan so that by the point the company expects to face due diligence, it can sail through easily.  Prioritizing the steps in the plan and carefully controlling expenses will keep your startup from spending too much money too early in the process, while promoting innovative, solid solutions that cost a fraction of massive “throw money at the problem” efforts.
