Code and Algorithm Optimization
In most cases, optimizing code and algorithms is many times more effective at boosting performance, and far less expensive, than upgrading hardware. It is not unusual for our Optimization Team to boost code performance by an order of magnitude after several days of review and updates. See our brochure for a quick overview.
SQL Query and Hardware Optimization
Some of our clients maintain databases with hundreds of billions of rows of data. We have designed SQL Server systems that perform as well with 50 billion rows as they do with 50 million. Making a query run quickly on a very large dataset takes knowledge and experience. We will optimize your T-SQL, identify and resolve query bottlenecks, and tune your hardware where necessary to ensure the fastest possible performance.
Algorithm Review and Optimization
A step up from regular code optimization, an algorithm review can yield tremendous benefit by avoiding unnecessary work altogether. Many serial procedures perform more work than is needed to accomplish a given goal. Our experts in Algorithm Optimization will look at your algorithms with a fresh eye and suggest solutions that can have a meaningful impact on system performance.
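To make the idea concrete, here is a minimal C++ sketch (the task and function names are ours, purely for illustration) of the kind of rework an algorithm review targets: a pairwise duplicate check does quadratic work, while a hash set reaches the same answer in a single pass.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// O(n^2): compares every pair of elements to find a duplicate.
bool has_duplicate_naive(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n): a hash set remembers what has been seen, avoiding the
// repeated comparisons entirely.
bool has_duplicate_fast(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second) return true;
    return false;
}
```

On a million-element input, the second version does on the order of a million operations instead of roughly half a trillion.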
File Access Optimization
Reading and writing files is a common requirement for systems dealing with large amounts of data. There are many ways to optimize file reading and writing depending on the type of problem being solved, and the difference in approach can have a tremendous impact on speed: one method can read a 1GB file into memory on a standard workstation in under a second, while another can take 30 seconds. Optimal file access methods, solution-specific in-memory structures and tailored file layout and indexing techniques can be combined to yield the best solution.
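As one illustration of that gap (a C++ sketch with invented names, assuming the file exists), compare a byte-at-a-time read with a single sized read: the second makes one allocation and one large read instead of one library call per byte.

```cpp
#include <cstddef>
#include <fstream>
#include <string>

// Slow: one library call and one push_back per byte.
std::string read_char_by_char(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::string data;
    char c;
    while (in.get(c)) data.push_back(c);
    return data;
}

// Fast: discover the size up front, allocate once, read in a single call.
std::string read_whole_file(const std::string& path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::string data(static_cast<std::size_t>(in.tellg()), '\0');
    in.seekg(0);
    in.read(&data[0], static_cast<std::streamsize>(data.size()));
    return data;
}
```

For random access into very large files, memory-mapping is often the next step beyond a single sequential read.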
Parallel Processing Optimization
Most of the computing power breakthroughs of the past decade have come because CPUs have not only gotten faster, they have multiplied. Unfortunately, many internal applications written today still use a single core to do all their work. We will track down every opportunity for parallel processing and suggest the optimal way to rewrite your code to make full use of it.
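A minimal sketch of the idea in standard C++ (`parallel_sum` and its chunking scheme are ours, not a prescription): split the input into one contiguous chunk per worker and let each core sum its share concurrently.

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Sum a vector by handing one contiguous chunk to each worker thread.
long long parallel_sum(const std::vector<int>& v, unsigned workers) {
    if (workers == 0) workers = 1;
    const std::size_t chunk = (v.size() + workers - 1) / workers;
    std::vector<std::future<long long>> parts;
    for (std::size_t begin = 0; begin < v.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, v.size());
        parts.push_back(std::async(std::launch::async, [&v, begin, end] {
            return std::accumulate(v.begin() + begin, v.begin() + end, 0LL);
        }));
    }
    long long total = 0;
    for (auto& part : parts) total += part.get();  // join and combine
    return total;
}
```

In practice the worker count would start from `std::thread::hardware_concurrency()` and be tuned from there.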
Making code run in parallel is a non-trivial task. Depending on the circumstances, routines must be rewritten, data structures must be compacted and appropriately indexed, and critical sections must be handled to avoid deadlock. We know how different chipsets behave, the benefits and limits of hyper-threading, and the ideal thread count for any given process.
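The deadlock point can be made concrete with a C++17 sketch (the `Account` type is invented for illustration): two threads transferring between the same pair of accounts in opposite directions will deadlock if each locks its own side first, so both mutexes must be acquired together.

```cpp
#include <mutex>

struct Account {
    std::mutex m;
    long long balance = 0;
};

// std::scoped_lock acquires both mutexes using a deadlock-avoidance
// algorithm, so opposing transfers can never end up holding one lock
// each while waiting forever on the other.
void transfer(Account& from, Account& to, long long amount) {
    std::scoped_lock lock(from.m, to.m);
    from.balance -= amount;
    to.balance += amount;
}
```

The alternative, manual approach is to impose a global lock ordering; the scoped lock simply automates that discipline.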
Cache Optimization
Caching is commonly used to keep frequently referenced data in memory so that retrieving it repeatedly is much faster than going back to disk. Many programmers assume caching is handled for them by the hard drive or the CPU, but these mechanisms are generic and can be improved upon tremendously with the right approach. This is in part because regular system memory (DRAM) can be a hundred times slower to access than CPU cache memory. Our optimization experts understand how to structure code and guide the compiler to maximize use of the CPU cache, cutting processing time through a combination of advanced techniques such as data structure tuning, NUMA-aware allocation and processor affinity.
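The data-structure-tuning point can be sketched in a few lines of C++ (names ours): both functions sum the same row-major matrix, but the first walks memory in storage order and uses every byte of each cache line, while the second strides a full row ahead on every step.

```cpp
#include <cstddef>
#include <vector>

// The matrix is stored row-major in one flat buffer:
// element (r, c) lives at index r * cols + c.

// Cache-friendly: sequential access; every loaded cache line is fully used.
long long sum_rows_first(const std::vector<int>& m,
                         std::size_t rows, std::size_t cols) {
    long long total = 0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            total += m[r * cols + c];
    return total;
}

// Cache-hostile: each step jumps `cols` integers ahead, so most of
// every cache line fetched from DRAM is wasted.
long long sum_cols_first(const std::vector<int>& m,
                         std::size_t rows, std::size_t cols) {
    long long total = 0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            total += m[r * cols + c];
    return total;
}
```

Both return the same total; on matrices too large for the cache, the column-first version can be several times slower.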
In-Memory Structure Optimization
It is common knowledge that working with data in memory is much faster than loading it from disk. That makes memory a scarce resource: there is almost always less of it than your data demands. Storing more data in memory, and doing so efficiently, can be a key performance driver. We know when to use an array instead of a list or dictionary, when a hash table needs a custom memory allocator, and when pre-loading less data yields superior results. We will identify bottlenecks using a combination of profiling and critical code review and suggest the best way to remove them.
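The array-versus-dictionary trade-off mentioned above can be sketched in C++ (the counting task and names are illustrative): when keys are small, dense integers, a plain array replaces hashing and per-node allocation with a single contiguous, cache-friendly block.

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// General-purpose: works for any key type, but pays for hashing and
// scattered node allocations on every insert.
std::unordered_map<int, int> count_with_map(const std::vector<int>& ids) {
    std::unordered_map<int, int> counts;
    for (int id : ids) ++counts[id];
    return counts;
}

// Dense integer keys in [0, max_id]: one contiguous allocation, and the
// key itself is the index.
std::vector<int> count_with_array(const std::vector<int>& ids, int max_id) {
    std::vector<int> counts(static_cast<std::size_t>(max_id) + 1, 0);
    for (int id : ids) ++counts[static_cast<std::size_t>(id)];
    return counts;
}
```

The array also makes its memory footprint explicit up front, which matters when deciding how much data to pre-load.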