Built for Scale.

Consistent, predictable, and scalable.


Meet the A-list of Cloud Performance

DataTools Cloud services are more consistent, predictable, and scalable than anyone else's – and we can prove it.

We maintain more server capacity than anyone else, and our scaling process mandates a minimum 40% redundancy buffer at all times – all built on industry-leading Amazon Web Services.

This ensures that our services not only remain reliable but also deliver exceptional performance, even during peak usage times – which is why Australia's leading companies choose DataTools for their address capture and verification solutions.


Consistent performance at high standards.


Enhanced Quality Management Certification

The ISO 9001 standard is part of the larger ISO 9000 family of quality management standards, which together establish a comprehensive approach for organizations striving to achieve excellence in their operations. By implementing ISO 9001, companies can demonstrate their commitment to quality, efficiency, and customer satisfaction.


Robust and Reliable

DataTools understands that reliability is of the utmost importance. DataTools Cloud servers are housed in Amazon Web Services' (AWS) state-of-the-art, highly available data centre in Sydney, Australia.


24-Hour Monitoring and Automatic Health Checks

DataTools Cloud is monitored constantly through automatic, custom-built health checks that check the vitals of the servers every 3 seconds. If a server is determined to be unhealthy, it is automatically removed from the cluster and replaced by a new server image.

In addition to the automatic health checks within the data centre, two independent external providers, Pingdom and Uptime Robot, monitor not only service availability but also service performance.
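
As a rough illustration of that monitor-and-replace loop, here is a minimal Python sketch. The /health endpoint, the probe, and the replacement logic are illustrative assumptions, not DataTools' actual implementation:

```python
import time
import urllib.request

CHECK_INTERVAL_SECONDS = 3  # vitals are checked every 3 seconds

def is_healthy(base_url: str) -> bool:
    """Probe a (hypothetical) /health endpoint; any error counts as unhealthy."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def replace_server(old_url: str) -> str:
    """Stand-in for launching a fresh instance from the current server image."""
    print(f"retiring {old_url}, launching replacement from server image")
    return old_url  # a real implementation would return the new server's address

def monitor(cluster: list[str]) -> None:
    while True:
        for server in list(cluster):
            if not is_healthy(server):
                # Unhealthy servers are pulled from rotation and replaced.
                cluster.remove(server)
                cluster.append(replace_server(server))
        time.sleep(CHECK_INTERVAL_SECONDS)
```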

Engineered for 24/7 Reliability.


Set and forget

The service is fully maintained for you by DataTools. DataTools Server Specialists look after all server updates, including security patches and data file updates, without you or your IT team needing to worry about anything.


Service Commitment (Financially Backed SLA)

DataTools uses commercially reasonable efforts to make the DataTools Cloud service available with an Annual Uptime Percentage of at least 99.9% during the Service Year. In the event the DataTools Cloud service does not meet the Annual Uptime Percentage commitment, you will be eligible to receive a Service Credit.
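
For context, a 99.9% annual uptime commitment permits roughly 8.76 hours of downtime across a whole year; a quick back-of-the-envelope check in Python:

```python
# What a 99.9% annual uptime commitment allows, in plain numbers.
hours_per_year = 365 * 24                      # 8,760 hours
allowed_downtime_hours = hours_per_year * (1 - 0.999)
print(f"{allowed_downtime_hours:.2f} hours of downtime per year")   # 8.76
print(f"{allowed_downtime_hours * 60 / 12:.1f} minutes per month")  # ~43.8
```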


Intelligent Load Balancing Cluster Control

The Intelligent Load Balancing Cluster Control automatically distributes and balances incoming application traffic among all running servers, improving the availability and scalability of the DataTools service.
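
As a simplified illustration, the sketch below shows plain round-robin distribution, the most basic form of load balancing; the actual Cluster Control is assumed to also weigh server health and load:

```python
from itertools import cycle

# Plain round-robin: each incoming request goes to the next running server.
servers = ["server-1", "server-2", "server-3"]
rotation = cycle(servers)

def route(request_id: int) -> str:
    target = next(rotation)
    print(f"request {request_id} -> {target}")
    return target

for i in range(6):
    route(i)  # traffic alternates evenly across the three servers
```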

An architecture prepared for growth.


Capacity

DataTools Shared Server Clusters contain a minimum of 20 large servers in the Amazon data centre in Sydney. For increased reliability, the servers are evenly split across multiple isolated Availability Zones connected through low-latency links.
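
To illustrate the even split, here is a small Python sketch; the zone names follow AWS's Sydney region (ap-southeast-2), but the placement logic itself is an assumption:

```python
# Spreading a 20-server cluster across the three Sydney Availability Zones.
zones = ["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"]
servers = [f"server-{n:02d}" for n in range(1, 21)]

placement = {zone: [] for zone in zones}
for index, server in enumerate(servers):
    placement[zones[index % len(zones)]].append(server)

for zone, hosted in placement.items():
    print(zone, len(hosted))  # 7, 7 and 6 servers respectively
```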


DataTools performs server updates with no downtime

DataTools follows a rigorous process to eliminate possible downtime during updates.

The process begins with the creation of a new “Pre-Release” server image – based on the current production image – that is isolated from the production servers. The Pre-Release server is then updated with all the latest server and security patches, followed by the installation of the DataTools software and data file updates.

The server is then put through a thorough testing and QA phase before being approved for deployment into the DataTools Dev/UAT environment for customer testing over a pre-release period of 1 week.

The deployment process into the DataTools Production environment involves the creation of a new server from the Pre-Release image for every existing server in the Production Cluster.

Once all the new servers are up and running, they are added to the Production Cluster, temporarily doubling the cluster size. The old servers are then retired from the Production Cluster, leaving only the new servers, with no downtime experienced by users.
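
Here is a minimal sketch of that rollout pattern (often described as an immutable or blue-green style deployment); all names are illustrative, not DataTools' actual tooling:

```python
# Immutable rollout: build a new generation from the Pre-Release image,
# double the cluster, then retire the old generation.
def launch_from_image(image: str, count: int) -> list[str]:
    """Stand-in for creating new servers from a given server image."""
    return [f"{image}-server-{n}" for n in range(1, count + 1)]

def rolling_update(cluster: list[str], prerelease_image: str) -> list[str]:
    # One new server per existing production server.
    new_generation = launch_from_image(prerelease_image, len(cluster))
    # Both generations serve traffic: the cluster temporarily doubles.
    combined = cluster + new_generation
    assert len(combined) == 2 * len(cluster)
    # Old servers are retired; users never see a gap in service.
    return new_generation

production = [f"prod-server-{n}" for n in range(1, 5)]
production = rolling_update(production, "pre-release")
print(production)  # only the new generation remains
```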


Scalability and Auto Scaling

DataTools constantly monitors server traffic and performance to ensure the base cluster size adequately handles the expected performance requirements with power in reserve, pre-empting busy times of the day, week and year.

If average CPU usage across the cluster rises above 60% for more than 10 minutes, the cluster automatically launches additional servers. This continues until the cluster’s average CPU usage over a 1-hour period drops below 15%, at which point a single server is removed from the cluster (while maintaining an average CPU usage below 60%) until the cluster returns to its base size.

Auto Scaling enables the DataTools service to scale and handle instant changes in requirements or spikes in popularity, reducing the need to perfectly forecast traffic.
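
The published thresholds can be restated as a simple decision function; this Python sketch only summarises the rules above and is not DataTools' actual Auto Scaling configuration:

```python
# The published scaling thresholds, restated as a decision function.
SCALE_OUT_CPU = 60.0      # % average CPU, sustained for 10 minutes
SCALE_IN_CPU = 15.0       # % average CPU over a 1-hour window
BASE_CLUSTER_SIZE = 20    # the cluster never shrinks below its base size

def desired_size(current: int, cpu_10min: float, cpu_1h: float) -> int:
    if cpu_10min > SCALE_OUT_CPU:
        return current + 1                       # launch an additional server
    if cpu_1h < SCALE_IN_CPU and current > BASE_CLUSTER_SIZE:
        return current - 1                       # retire a single server
    return current                               # hold steady

print(desired_size(20, cpu_10min=72.0, cpu_1h=55.0))  # 21 – scale out
print(desired_size(24, cpu_10min=10.0, cpu_1h=12.0))  # 23 – scale in
print(desired_size(20, cpu_10min=10.0, cpu_1h=12.0))  # 20 – already at base
```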


Disaster Recovery

In the extremely rare event that multiple isolated Availability Zones in the AWS Sydney data centre were inoperable, causing all DataTools servers to go offline, DataTools would enact its alternative AWS data centre disaster recovery plan.

In preparation for this plan, DataTools securely stores dormant copies of the latest DataTools server images in an offshore AWS data centre, ready to be started in the event of a complete data centre failure. This would allow DataTools to bring the service back online even without the primary AWS Sydney data centre being available.
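
A minimal sketch of that failover decision, assuming a hypothetical offshore recovery region; the region choice and function names are illustrative:

```python
PRIMARY_REGION = "ap-southeast-2"   # AWS Sydney
RECOVERY_REGION = "ap-south-1"      # hypothetical offshore region

def start_dormant_images(region: str) -> None:
    """Stand-in for booting the stored server images in the recovery region."""
    print(f"starting dormant DataTools server images in {region}")

def serving_region(primary_healthy: bool) -> str:
    """Decide which region should serve traffic."""
    if primary_healthy:
        return PRIMARY_REGION
    start_dormant_images(RECOVERY_REGION)
    return RECOVERY_REGION

print(serving_region(primary_healthy=True))   # normal operation: Sydney
print(serving_region(primary_healthy=False))  # failover: offshore images start
```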
