Ditching Windows: Here’s How Ubuntu Handles Updates And Why It’s Better

Making the switch from Windows to a Linux-based desktop OS like Ubuntu has presented minor challenges, pleasant surprises and even a certain level of catharsis. As I close in on one month using Ubuntu as my daily driver, I’m going to detail as much of this journey as I can for others considering making the same move. This time around it’s a look at what ultimately pushed me off Windows: updates!

Updates on Windows and Ubuntu come in many forms: security updates, feature updates and software updates, among others. If you’re someone who’s ever entertained the idea of ditching Windows for Linux, chances are Windows’ aggressive update behavior is a primary reason.

Microsoft’s system update policy has reached a point where it’s implementing artificial intelligence to guess when a user is away from their PC so that Windows can reboot and apply the latest updates. When I wrote about that, many people said “hey, how about just letting the person in front of the PC make that choice?”

While Windows 10 does give users some degree of control over when to update, it still has a nasty habit of forcing a reboot at a moment when work can be lost if you haven’t been diligent about saving. At best it results in a headache and lost time.

So that’s been one of the most refreshing aspects of using Ubuntu to date. No forced reboots, no aggressive update nags. In fact, the majority of the software you’ve installed yourself (via the included Software Center or Snap Store) updates right alongside the system.

But as a Linux novice I didn’t know what was happening behind the curtain from a technical standpoint, so I reached out to Will Cooke, Engineering Director for Ubuntu Desktop, to shed some light on it.

It makes sense to start with something important like software security updates. These are updates that occur as a result of potential security flaws discovered in existing software. “These updates are automatically downloaded and applied in the background with no user interaction because we consider them to be so important,” Cooke says.

If the affected application is currently in use, Ubuntu simply waits until the software is closed. Once it’s reopened, the new version is loaded. No nags, no interruptions, no stepping through multiple dialogue windows. It’s an essentially invisible process for the user.
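On a stock Ubuntu install this background behavior is driven by the unattended-upgrades package; as a rough sketch (file and log paths assumed from a default install), you can confirm it is switched on from a terminal:

```shell
# Show whether automatic package-list refreshes and unattended
# security upgrades are enabled (stock Ubuntu configuration file).
cat /etc/apt/apt.conf.d/20auto-upgrades

# Inspect what the last unattended-upgrades run actually did.
sudo tail /var/log/unattended-upgrades/unattended-upgrades.log
```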

I discussed Snaps in my last two articles about Ubuntu, and for good reason. They’re one-click installs of popular software (like OBS, Skype, VLC, Telegram, Discord, etc.) whose updates the operating system itself manages. Updates for these apps are rolled out automatically, meaning you don’t have to open and update each one separately. In rare cases they may require a reboot to be fully installed, but Ubuntu never forces it on you.
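Snap updates can also be inspected and driven by hand if you’d rather not wait; a quick sketch using the snap CLI that ships with Ubuntu:

```shell
# List installed snaps and any updates waiting to be applied.
snap list
snap refresh --list

# Apply pending snap updates immediately instead of waiting
# for the automatic refresh schedule.
sudo snap refresh
```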

Livepatch: Windows Needs This

Cooke is most proud of how Ubuntu handles crucial updates to the kernel. On a Linux desktop, the kernel is basically the operating system minus the graphical interface. Among other tasks it manages your hardware, memory, disks, user accounts, when software runs and what permissions it has. So kernel updates are a big deal. “These are typically low-level driver fixes which do require a reboot in order to take effect,” Cooke explains. That’s because these modules and fixes are loaded at boot time and not again later.
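You can see both halves of this on any Linux machine — the kernel version currently running and the driver modules that were loaded for your hardware at boot:

```shell
# Print the kernel release currently running.
uname -r

# List a few of the kernel modules (drivers) that were loaded.
lsmod | head
```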

But what if there’s a critical security update that needs to be issued to the kernel? “In some cases it’s possible for us to issue a ‘Livepatch,’” Cooke says. “The Livepatch software downloads the new code from our servers and can apply it to a running machine.” What does that mean? Essentially that your computer is now protected against that security bug without the need to reboot. “It’s fixed on a live running machine. Magic!” Cooke says.
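Enabling Livepatch on a desktop is a short exercise; a sketch using Canonical’s canonical-livepatch snap (the token placeholder below is something you generate from your own Ubuntu One account — it is not filled in here):

```shell
# Install the Livepatch client, attach it with your token,
# then check which kernel patches have been applied live.
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <YOUR-LIVEPATCH-TOKEN>
canonical-livepatch status
```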

Even Reboots Are Better

Even the act of rebooting the system for updates is better and faster on Linux. Windows asks you to wait as it applies updates (sometimes both while shutting down and during restart), while Ubuntu just, well, restarts. If updates are being applied it only takes seconds, and I’d argue the average user wouldn’t even notice the time it takes.

The takeaway for me is that Ubuntu (and certain other Linux distributions, though I haven’t dabbled with them yet) handles system-wide updates far more elegantly than Windows does. Not only does it handle both your third-party software and your low-level operating system updates, it absolutely never forces you to reboot. And in those crucial situations where an imminent security bug emerges, Ubuntu can patch it seamlessly without ever needing to request a reboot.

This will mean different things to different people. In my case, it means I feel more focused and less annoyed when I’m in front of my machine and writing, working, researching or just playing around. And to me, that’s what matters most.

Stay tuned for more updates on my Linux journey. If you have questions or just want to talk about this article, reach out to me using the social media links below.

Windows scores victory over Linux as another state decides to switch

The German state of Lower Saxony is set to follow Munich in migrating thousands of official computers from Linux to Microsoft’s Windows.

As first reported by Heise, the state’s tax authority has 13,000 workstations running OpenSuse — which it adopted in 2006 in a well-received migration from Solaris — and it now wants to migrate to a “current version” of Windows, presumably Windows 10.

The authority reasons that many of its field workers and telephone support staff already use Windows, so standardisation makes sense. An upgrade of some sort would soon be necessary anyway, as the PCs are running OpenSuse versions 12.2 and 13.2, neither of which is supported anymore.

In Lower Saxony’s draft budget, €5.9m is set aside for the migration in the coming year, with a further €7m annually over the following years; it is not yet clear how many years the migration would take.

“The unification of existing workstation systems will simplify procedures and facilitate software development for the KONSENS network,” a spokesperson for the Lower Saxony finance ministry said, referring to the German states’ project — now over a decade old — for standardising the IT systems used by their disparate tax authorities.

The spokesperson added that it was too early to provide a timetable for Lower Saxony’s migration, and that a more detailed framework would not be available before the end of the year, due to the complexity of the task.

The rationale of sticking with one system was also deployed in Munich, where the city council finally authorized its long-awaited switch from Linux to Windows in November last year. Munich’s shift away from LiMux — the city’s own Ubuntu-based distribution — is expected to cost more than €50m overall, including the deployment of around 29,000 Windows-based computers.

The Munich decision was made by the city’s ruling coalition of the centre-left Social Democrats (SPD) and centre-right Christian Social Union (CSU), a party that only operates in Bavaria and is the long-running junior partner to Angela Merkel’s Christian Democrats (CDU).

Lower Saxony, for its part, is governed by an SPD-CDU coalition that was formed partly on the basis of an agreement that included turning the state’s back on Linux. Other administrative departments there, including the police, are already using a Windows 8.1-based client from a local company.

Lower Saxony’s tax authority will conduct a cost-benefit analysis for the migration. The Free Software Foundation Europe (FSFE), which has been highly critical of the decision to switch away from Linux, welcomed the procedural formality, but programme manager Max Mehl said it was important to keep an eye on who conducts the analysis.

The Munich migration followed the recommendations of a report from consultants at Accenture, a Microsoft partner.

“It is already apparent that the desired consolidation of the IT landscape is going in the wrong direction,” said Mehl. “Instead of taking the opportunity to expand the existing infrastructure of Linux systems, the state voluntarily goes back into a cage of artificial dependencies on individual manufacturers.”

Mehl pointed to Lower Saxony’s neighbour, Schleswig-Holstein, as an example of a far more “future-oriented IT strategy”. Schleswig-Holstein has been governed since last year by a ‘Jamaica’ coalition of the CDU, the Greens and the liberal Free Democrats (the party colours match the Jamaican flag), which decided a few weeks ago to go in entirely the opposite direction, abandoning Windows for free software.

Germany’s Open Source Business Alliance also sees an overall trend of greater open source adoption, said board member Holger Dyroff, who noted that the state of Thuringia also recently adopted a similar strategy.

“With equal regret we have to accept that others go a different route in their particular project. What we clearly dislike is that such projects seem not to focus on the question of what the user requirements are, but appear to set [as a target] a specific application or a certain OS without functional and cost analysis, in praise of consolidation,” Dyroff said.

“When we purchase other goods, we don’t ask for cars from a particular brand to consolidate the fleet; we define requirements and a price,” he added. “This is proper competition for public projects.”

SQL Server Backup Strategy

Microsoft SQL Server is often one of the most critical applications in an organization, with too many uses to count. Because of this criticality, your SQL Server and its data must be thoroughly protected. Business operations rely on a core component like Microsoft SQL Server to manage databases and data. The importance of protecting this server and ensuring you have a recovery plan in place is tangible. Users expect consistent Availability of data. Any loss of critical application Availability can mean decreased productivity, lost sales, lost customer confidence and potentially lost customers. Does your business have a recovery plan in place to protect its Microsoft SQL Server application Availability? Has this plan been thoroughly tested?

Microsoft SQL Server works on the backend of your critical applications, which makes it imperative to have a strategy in place in case something happens to your server. Veeam specifically has tools to back up your SQL Server and restore it when needed. Veeam’s intuitive tool, Veeam Explorer for Microsoft SQL Server, is simple to use and doesn’t require you to be a database expert to quickly restore a database. This blog post aims to discuss using these tools and what Veeam can offer to help ensure your SQL Server databases are protected and always available to your business.


There are some things you should take note of when using Veeam to back up your Microsoft SQL Server. One easy way to ensure your backup is consistent is to check that application-aware processing is enabled for your backup job. Application-aware processing is Veeam’s proprietary technology based on Microsoft Volume Shadow Copy Service (VSS). This technology quiesces the applications running on your virtual machine to create a consistent view of the data. This is done so that there are no unfinished database transactions at the moment a backup is taken. The result is a transactionally consistent backup of a running VM, minimizing the potential for data loss.
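Veeam drives this through VSS automatically, but the consistency goal it aims for is the same one a native transactionally consistent T-SQL backup provides; a minimal sketch for comparison (database name and backup path below are purely hypothetical, and this is not Veeam’s mechanism):

```shell
# Not Veeam's mechanism -- just a native SQL Server backup that is
# likewise transactionally consistent. Names and paths are examples.
sqlcmd -S localhost -U sa -Q "BACKUP DATABASE [SalesDb] TO DISK = N'/var/opt/mssql/data/SalesDb.bak' WITH CHECKSUM, INIT;"
```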

Microsoft to integrate Windows Server 2019 with Azure cloud NAS

Microsoft has signalled that Windows Server 2019 will offer hybrid storage features that could encourage users to place more data into its Azure public cloud and less onto on-premises storage arrays.

News of the company’s intentions popped up at the end of a post announcing the general availability of Azure File Sync, a new tool that’s best understood as mashing up sync ‘n’ share services such as Dropbox and OneDrive with a network attached storage device.

Azure File Sync will replicate data from Windows Server to Azure and on to other servers in an organisation’s fleet. Once the data is in Azure, Microsoft’s cloud can perform automatic storage tiering so that, in Microsoft’s words, it stores “only the hottest and most recently accessed data on-premises”.

That’s the kind of job that dedicated storage arrays or software often perform today.

And Microsoft’s flagged that there’s more coming.

“We have a whole series of new features and incremental improvements planned throughout the summer and fall, including support for and tighter integration with Windows Server 2019,” wrote Tad Brockway, Microsoft’s general manager for Azure Storage and Azure Stack.

That last point – tight integration with Windows Server 2019 – delivers on multiple Microsoft plans to make its next server OS more Azure-friendly.

That’s not unexpected, given Wall Street’s keen interest in recurring revenues and past Microsoft acquisitions that show it is very excited about cloud storage: it bought cloud storage gateway outfit StorSimple in 2012 and added cloud NAS concern Avere in 2018.

Microsoft is also working with partners on cloud storage. It recently announced a preview of another service named ‘Azure NetApp Files’, which takes NetApp’s ONTAP file system and runs it natively in Azure.

The idea is to give NetApp users a cloud option that allows them to run a single logical pool of storage spanning on-prem and cloud, but that behaves like a NetApp device.

Among Microsoft’s other cloud storage services is the ‘Azure Site Recovery’ service, which takes snapshots of virtual machines, puts them in Azure, then fails over to the cloud if on-prem servers fall over.

Snapshots have long been NetApp’s party trick. So Microsoft is both supporting the company and challenging it. It’s also signalling that Windows Server will be able to function as a hybrid NAS, and talking up improvements to the Storage Spaces software-defined storage in Windows Server 2019.

Windows 7’s impending EOL triggers Windows 10 charm offensive

Microsoft is starting an onslaught on the biggest part of its market base – enterprises that haven’t changed to Windows 10 yet and are languishing on Windows 7.

The operating system – which still holds over 43 percent of the market, mostly because of the reluctance of enterprise users to upgrade – is set to reach end of life in January 2020, and many businesses are still smarting from the end of XP support.

Microsoft intends to bang the drum about a utopian vision of a world where everyone uses Windows 10, Windows Cloud and Office 365 throughout this year, with a harsher, more ‘Protect and Survive’ tone during the final year before the end.

‘End-of-life’, for the less-initiated, is the point where Microsoft stops supporting a product, meaning no more bug-fixes, security patches or new functionality, making any user – personal or enterprise – far more susceptible to malware attacks.

The less successful Windows 8(.x) has seen its market share decimated since Windows 10 launched and won’t take much to kill off when the time comes, so Microsoft is instead concentrating on the mountain of malcontents that don’t want to risk borking proprietary software, drivers not working on older equipment, a big upsurge in data collection and all the other fun stuff we associate with Windows 10.

It’s believed that it won’t just be Microsoft executives that are being required to help, with OEMs and resellers being roped in to try and talk IT managers out of downgrading to Windows 7, a practice which is still commonplace where fear of Windows 10 is rife, both amongst IT professionals and the wider community.

When Windows XP was replaced by Vista back in 2006, it caused huge problems because almost every device required a custom-written driver to be compatible with the fledgeling OS.

Although it’s unlikely that will happen again – particularly with Windows-as-a-Service promising continuity for some considerable time – there’s still a risk of disruption in offices where Windows 7 is doing just fine, raising the perfectly reasonable age-old question: if it ain’t broke… why fix it?

Microsoft is already starting to pull up the drawbridge, after announcing it will no longer run forums for older versions of Windows.

Backup and Restore operations with SQL Server 2017 on Docker containers using SQL Operations Studio

In this 18th article of the series, we’ll discuss the concepts of database backup-and-restore for SQL Server Docker containers using SQL Ops Studio (SOS). Before proceeding, make sure you have the Docker engine installed and SQL Ops Studio configured on your host machine.

This article covers the following topics:

Overview of SQL Operations Studio (SOS)
How to use the SQL Ops Studio integrated terminal
Definition of Docker containers
Step-by-step instructions to perform backup-and-restore of SQL Server 2017 Docker containers using the SQL Ops Studio interface
And more…

SQL Ops Studio

Microsoft’s new light-weight cross-platform GUI tool under the SQL Server management umbrella is called SQL Operations Studio. SQL Operations Studio is a cross-platform graphical user interface for working with SQL Server instances.

Feature Highlights

It provides cross-platform support to manage SQL Server databases on Windows, macOS, and Linux, or Docker containers on any platform
SQL Server Connection Management that supports
Connection Dialog
Server Groups creation
Azure Integration
Create Registered Servers list
It can also be used to connect to Microsoft’s cloud databases, including Azure SQL Database and Azure SQL Data Warehouse.
In-built Object Explorer support
Advanced T-SQL query editor support
New Query Results Viewer with data grid support
The result-set may be exported to JSON/CSV/Excel
Derive custom charts
Manage Dashboard using standard and customizable widgets and insights
Backup and Restore database dialog
Task History window to evaluate the task execution status
In-house database scripting options
In-built Git integration
In-built shell support through integrated terminal options
And more…

The listing continues…

I would recommend downloading the version for your platform of choice to see how it works.
Docker containers

Docker carves up a running system into small containers, each of which is sealed and segmented, has its own programs and is isolated from everything else. Docker’s mission is to build, ship, and run distributed applications anywhere and on any platform — on your local laptop, in the cloud, or on on-premise servers.

Containers are highly portable programs that are kept as small as possible and don’t have external dependencies
It is easy to create a Docker image, then move or copy it to another machine and be sure it will still work in exactly the same way.
To run SQL Server inside a Docker container you need:
Docker Engine 1.8 or above
A minimum of two gigabytes of disk space to hold the container image, along with two gigabytes of RAM

How to get started

Let’s start SQL Operations Studio and open the interactive terminal.

First, let’s download the latest SQL Server 2017 image from the Docker hub. To do so, run the docker pull command with the SQL Server 2017 image tag. You can pull other Docker container images from the Docker hub repository the same way. To get the latest SQL image, enter the word “latest” as the tag, after the colon. This gives us the newest SQL Server 2017 image.
[root@localhost thanvitha]# docker pull microsoft/mssql-server-linux:latest

Now, run the docker image using the docker run command.
[root@localhost thanvitha]# docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=thanVitha@20151' -p 1401:1433 --name SQLOpsBackupDemo -d microsoft/mssql-server-linux:latest

The SQL instance is now ready to accept connections. Next, to connect to the SQL instance, click the Server icon in the left corner of the window.

Add the connection details

Enter the IP address of the host machine along with the incoming port number, i.e., 1401.
Enter the SA login credentials
Click Connect

Next, follow the steps below to back up the databases

To perform the backup task, right-click the database and choose Manage from the databases window.

For example, right-click the SQLShackDemo database and choose Manage. In the database dashboard pane, we get some useful information, including the current recovery model, the last time database and log backups were performed, and the database’s owner account

Now, let’s go ahead and click the backup icon. Another window appears where we can specify a backup name. SQL Operations Studio suggests a name for the database backup that references the current date and time.
Let’s go ahead and choose the type of backup; in this case, it’s the full backup type.
For the backup file location, it displays a full path that is relative to the Docker container. We can also adjust the settings using the advanced configuration options.

Press the Backup button to initiate the backup task.

Now, you can see that the sidebar has changed to the task-history view on the left. You can check the status of your backup job here.

Once you’re done with that, you can switch back to the server sidebar. Connect to the SQL container and open an interactive bash terminal using the docker command to confirm the backup file that was created from the SQL Ops Studio backup dialog.

[root@localhost thanvitha]# docker exec -it SQLOpsBackupDemo bash

root@cc8f1beae1e1:/# ls -l /var/opt/mssql/data/*.bak
-rw-r-----. 1 root root 434176 May 25 14:26 /var/opt/mssql/data/SQLShackDemo-2018525-10-24-39.bak
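As an optional extra check, the backup can be validated from the same terminal with sqlcmd, which ships inside the container (the tools path below is an assumption based on the microsoft/mssql-server-linux image layout):

```shell
# Ask SQL Server to validate the backup file without restoring it.
docker exec -it SQLOpsBackupDemo /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'thanVitha@20151' \
  -Q "RESTORE VERIFYONLY FROM DISK = N'/var/opt/mssql/data/SQLShackDemo-2018525-10-24-39.bak'"
```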

Now, let’s dig into the second part of the process.

To do the database restore, I’ll instantiate a new SQL instance, SQLOpRestoreDemo, using the following docker run command.
[root@localhost thanvitha]# docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=thanVitha@2015' -p 1402:1433 --name SQLOpRestoreDemo -d microsoft/mssql-server-linux:latest

Let’s copy the backup file to the host machine by navigating to the backup file directory. Then, copy the backup file from the host machine to the other SQL Docker container with the following docker cp commands.
[root@localhost thanvitha]# docker cp SQLOpsBackupDemo:/var/opt/mssql/data/SQLShackDemo-2018525-10-24-39.bak /tmp

[root@localhost thanvitha]# docker cp /tmp/SQLShackDemo-2018525-10-24-39.bak SQLOpRestoreDemo:/var/opt/mssql/data/

Now, connect to the SQL instance by entering the required details. Here, the IP address and port number are entered to connect to the instance.

Next, select the restore icon from the dashboard.

On the restore database screen, select the general section and choose the backup file by navigating to the backup directory.

In the files tab, specify the location to relocate the data and log files.

In the options tab, choose the overwrite options.

You can also generate a script and run it, or simply press the restore button to complete the restore process.
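If you take the script route, the generated script boils down to a plain RESTORE statement, which could equally be run through sqlcmd inside the container (a sketch; the tools path is an assumption based on the image layout):

```shell
# Restore the copied backup into the new instance, overwriting
# any existing database of the same name (WITH REPLACE).
docker exec -it SQLOpRestoreDemo /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'thanVitha@2015' \
  -Q "RESTORE DATABASE [SQLShackDemo] FROM DISK = N'/var/opt/mssql/data/SQLShackDemo-2018525-10-24-39.bak' WITH REPLACE"
```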

The task history appears on the right side of SQL Ops Studio. This confirms that the SQLShackDemo database restored successfully.

You can also browse the SQL instance to verify the database.

That’s all for now…
Wrapping Up

So far, we have seen the steps to perform a database backup and restore of SQL Docker containers using the SQL Ops Studio interface.

We could say that it’s a light-weight version of SQL Server Management Studio (SSMS). The interface is very easy, straightforward and self-explanatory. It is built with numerous options and is exceptionally well laid-out to walk you through common procedures.

Windows 10 and Windows Server 2019 include support for leap seconds

Microsoft is finally going to support leap seconds with the next update to Windows 10, coming this fall, and in Windows Server 2019. While leap seconds are not a concept familiar to many customers, this is still a big deal because new government regulations in the US and EU require all devices to support them. Windows 10 and Windows Server 2019 will become the first platforms with support for leap seconds once the updates are pushed out.

For those unfamiliar with leap seconds: roughly every eighteen months, an extra second is added to UTC in order to adjust for irregularities in the earth’s rotation and keep it synced with mean solar time. You can read more about leap seconds on Wikipedia if you want further details.

To get a better idea of how this will work on Windows 10 and Windows Server 2019, you can take a look at the GIF.

Currently, the Windows clock does not count leap seconds and jumps straight to 17:00:00 from 16:59:59. After the update, the clock will go from 16:59:59 > 16:59:60 > 17:00:00.

The extra second will also be part of Precision Time Protocol support, one of the many improvements being added to Windows Server 2019. A list of 10 networking features was posted on Microsoft’s Networking Blog which showcases new additions and improvements being included in the upcoming server OS by the company.

Windows Server 2019 Insider build 17713 now available

A fresh Windows Server 2019 Insider preview build is now available for testers to try. The latest release moves the build number up to 17713, but there aren’t any new features to dive into.

Instead, today’s release comes alongside the launch of Windows Admin Center preview 1807. There are quite a few new odds and ends included in this release. Here is a look at the highlights:

July’s release brings a new streamlined experience for your gateway to Azure, plus a top customer request: the Virtual machine inventory page now supports multi-select to perform actions on multiple VMs at the same time!
Completely new functionality in this release consists of new file sharing functionality in the Files tool, and Azure Update Management integration in the Updates tool. These new features are described in detail below.
You can add, edit, and remove shares through the Files tool! Fine-tune access with the ability to add and remove users and groups, as well as control their permission level.
We’ve also made it easy to leverage the power of Azure in your own environment with Azure Update Management integration in the Updates tool. Azure Update Management is a free service in Azure Automation which allows you to easily keep all of the servers in your environment up to date, letting you manage and push updates from a single place, rather than on a per-server basis.

If you are interested in trying out either release, Windows Server 2019 build 17713 is available to download now from Microsoft. The build is set to expire on December 14, 2018. Meanwhile, Windows Admin Center preview 1807 is available from the “Additional Downloads” section of the Windows Server Insider download site.

Microsoft SQL Server 2008 Support Extended for Cloud Migrations

In a little under a year, Microsoft will end support for SQL Server 2008 and 2008 R2. But businesses looking to eke a few more years from their existing database investments, minus their server hardware expenditures, can buy more time by moving to the cloud.

Takeshi Numoto, corporate vice president of Cloud + Enterprise at Microsoft, on July 12 reminded the database administrator and developer communities that SQL Server 2008 and 2008 R2 support comes to an end on July 9, 2019. While they may continue to run their databases on those older versions, it would not be prudent to do so without Microsoft support, he cautioned.

When Microsoft pulls the plug on a software product, it effectively means the end of security updates. That is, unless a customer pays for premium support, an option typically offered to the biggest and wealthiest enterprise customers that can afford Microsoft’s bespoke support services. Most users are expected to take the route of upgrading to a newer version like SQL Server 2017.

It may be impractical for organizations to upgrade their databases in the time allotted, as many of them are attached to critical workloads. So, Microsoft is giving SQL Server 2008 and 2008 R2 deployments a new lease on life, provided they are migrated to the cloud.

Numoto announced “that Extended Security Updates will be available for free in Azure for 2008 and 2008 R2 versions of SQL Server and Windows Server to help secure your workloads for three more years after the end of support deadline,” in a blog post. “You can rehost these workloads to Azure with no application code change.”

Businesses will have another migration option in the fourth quarter of 2018. That’s when Microsoft is planning the general availability release of Azure SQL Database Managed Instance, a managed database-as-a-service product that will enable customers to move their SQL Server 2008 and 2008 R2 databases “with no application code change and near zero downtime,” Numoto stated.

In fact, the technology giant has its sights set on enterprise database workloads beyond those running on SQL Server.

Microsoft is adding Azure Database for MySQL and PostgreSQL support to the Azure Database Migration Service after July 2018, said Julia White, corporate vice president of Microsoft Azure, in a separate announcement. Currently in limited preview, the service supports SQL Server migrations to Azure SQL Virtual Machines as well as Oracle databases to Azure SQL Database or Azure SQL Database Managed Instances.

Finally, Microsoft is offering a new solid-state drive storage migration option for businesses that want to transfer databases and other types of workloads to Azure.

The new Azure Data Box option, called Data Box Disk, allows customers to load specialized drives with encrypted data and ship them back to Microsoft, whereupon the company copies the data into the customer’s storage accounts. In the case of Data Box Disk, Microsoft ships 40TB of SSD storage capacity to customers, who copy their data over a SATA or USB connection and send the disks back to Microsoft.

Building Better Test Data With SQL Provision

Development teams make software available for release once they are confident that it behaves consistently, as it was designed to behave, under as many different user workflows as they can test. Unfortunately, their test cells often don’t reflect the harsher reality of the live environments, where their software will encounter large volumes of real data and some “unexpected” data values.

Inevitably, this leads to unexpected issues when they deliver changes to the live environment. A good way to minimize this is to run early builds and tests with data that mimics what we expect in live environments so that the test results provide a more reliable indicator of the behavior seen on release. Unfortunately, data privacy and protection regulations will often restrict the movement and use of that data. So, how do you create test data that complies with regulations that prohibit the sharing of sensitive or personal data, but still looks and behaves like the real data?

If you need to do this manually using scripts, and you need to provision many development and test environments, then you’re going to find it difficult. This article will explain how to use SQL Provision to solve this problem in an automated fashion. It uses SQL Provision’s data masker tool to create SQL Server test data where any sensitive or personal information is obfuscated, but that retains the volume and characteristics of the predicted production loads. It then describes how to use SQL Clone to automate the provision of development and test environments with realistic yet compliant test data.

This combination of techniques is valuable in ensuring that we can evaluate the true behavior of our software before users encounter the application.
Security Matters

As we develop database software, there is often no substitute for the actual data that exists in a live environment. However, the security of this data is paramount, and most of our test and development systems have lower security configurations to allow for the flexible nature of the software development process.

While there are many complex issues around data security, data privacy, and data subject rights, most organizations realize that using actual data in less secure environments is a poor practice. Indeed, many countries around the world are introducing legislation that penalizes organizations for losing track of data. Laws, such as the GDPR, explicitly call out the need to anonymize or pseudonymize data in test and development environments.

Tools that provide data masking to randomize and protect sensitive data help ensure that the development team can still use data that is as close as possible in its character, volume, and distribution to the production data to evaluate the functionality and performance of software.
Random Data Is Not Enough

If security regulations mean you can’t use the production data, then you may decide simply to use “randomly-generated” test data. While this is helpful from a security standpoint, it introduces other issues into the development process.

Firstly, it won’t reflect the characteristics and distribution of the real data, so our application processes that use this data probably won’t behave or perform in the same way as they will when they encounter the real data. We need to “randomize” data to protect it, but we need to do so in a way that means it still looks and behaves like the real thing.
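The principle of randomizing a value while keeping it looking and behaving like the real thing can be sketched in a few lines. This is purely illustrative Python, not Data Masker; the `mask_phone` helper is a hypothetical name used only for this demonstration.

```python
import random
import string

# Illustrative sketch only: randomize the sensitive value, preserve its shape,
# so downstream code that parses or formats it behaves as it would in production.

def mask_phone(phone: str) -> str:
    """Replace every digit with a random digit, keeping punctuation intact,
    so the masked value still looks and parses like a phone number."""
    return "".join(random.choice(string.digits) if ch.isdigit() else ch
                   for ch in phone)

original = "303-555-1212"           # a value like those in the customer table
masked = mask_phone(original)

assert len(masked) == len(original)            # same length as the original
assert masked[3] == "-" and masked[7] == "-"   # same format survives masking
```

A real masking tool applies the same idea at scale, with rules per column and cross-column consistency, but the contract is identical: the character of the data survives even though the values change.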

Secondly, the developers and testers who write the features and evaluate their correctness become used to certain data sets, and when they see familiar data in the results, they can rapidly evaluate whether the system is working as intended. For example, specific data values may need to be present to mimic a series of orders in an e-commerce system and ensure all the various order permutations are tested and working correctly. Similarly, the more realistic the data, the easier it is for users involved in UAT to decide whether or not what the developers are delivering meets their needs. If the user is expecting to see specific schedules for a registration system, for example, then seeing random values instead can often be misinterpreted as a software malfunction.

In short, using random data will protect us from accidentally revealing sensitive or personal data and may have the additional benefit of introducing “edge cases” that we may not otherwise have considered testing. However, it’s not sufficient on its own. The test data must also contain a curated set of values to test the known cases and boundaries of our software. This might be a set of general ledger transactions for our accounting software or a known series of schedules for a registration system. This curated set of data would contain no sensitive data and would be consistent in all our development and test databases.
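The mix of randomized volume plus a small curated set can be sketched as follows. This is an illustration of the principle, not SQL Provision itself; the helper names are hypothetical, and the recognizable values echo the curated customer rows used later in this article.

```python
import random

# Illustrative sketch: a useful test set mixes a small curated set of known,
# sensitive-free rows with randomized bulk rows that supply realistic volume.

CURATED_CUSTOMERS = [
    # Rows the team recognizes on sight; IDs live in a reserved range.
    {"customer_id": 9900000, "customer_lastname": "Fanatic", "customer_region": "Colorado"},
    {"customer_id": 9900001, "customer_lastname": "Jones", "customer_region": "Virginia"},
]

def random_customers(n, seed=42):
    """Generate n randomized bulk rows, with IDs below the reserved range."""
    rng = random.Random(seed)
    regions = ["Colorado", "Virginia", "New York", "Cambridgeshire"]
    return [{"customer_id": i,
             "customer_lastname": "Cust%05d" % i,
             "customer_region": rng.choice(regions)}
            for i in range(n)]

def build_test_set(bulk_size):
    """Randomized volume plus the curated rows every tester knows."""
    return random_customers(bulk_size) + CURATED_CUSTOMERS

data = build_test_set(1000)
assert len(data) == 1002                                        # volume is there
assert any(r["customer_lastname"] == "Fanatic" for r in data)   # known case is there
```

Keeping the curated IDs in a reserved range means the randomized bulk data can never collide with the known rows, which keeps the curated cases stable across refreshes.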
Realistic but Compliant Test Data Using SQL Provision

Our goal is a lightweight, adaptable, and automated process that can provide developers and testers with a consistent set of data that is safe to use — since sensitive and personal data has been replaced with randomized values — but is also representative of the values found in the live system and contains the known values that are useful in their daily work.

However, any technique that relies on shifting vast quantities of data between systems, using database backup and restore, and running manual “data cleansing” scripts, will be slow, cumbersome, and hard to maintain. By contrast, the tools in this article use modern data masking and data virtualization to allow us to provision multiple environments consistently, throughout the software development lifecycle. I’ll explain how to use SQL Provision and a combination of SQL Clone and Data Masker for SQL Server to automate the following process steps:

Make a copy of the production data.
Perform random masking of any columns containing sensitive or personal data so that this data does not “leak” into less secure environments.
Inject a curated data set into the database — known data sets that contain enough values to adequately test the software, but without any sensitive information.
Use data virtualization to rapidly provision consistent databases on demand.

My previous article already demonstrated a simple way to implement Steps 1, 2, and 4. It used the PowerShell script shown in Listing 1 to automate the creation of data images using SQL Clone, the substitution of any sensitive or personal data with randomly-generated values using Data Masker, and then SQL Clone to deploy sanitized database clones to the development and test servers.

# Connect to SQL Clone Server
Connect-SqlClone -ServerUrl 'YOUR CLONE SERVER URL HERE'

# Set variables for image and clone locations
$SqlServerName = 'Plato'
$SqlInstanceName = 'SQL2016'
$SqlDevInstanceName = 'SQL2016_qa'
$ImageName = 'DataMaskerBase'
$Devs = @("Steve", "Grant", "Kathi")

# Get the Clone instances for the source and dev environments
$SqlServerInstance = Get-SqlCloneSqlServerInstance -MachineName $SqlServerName -InstanceName $SqlInstanceName
$SqlServerDevInstance = Get-SqlCloneSqlServerInstance -MachineName $SqlServerName -InstanceName $SqlDevInstanceName
$ImageDestination = Get-SqlCloneImageLocation -Path 'E:\SQLCloneImages'
$ImageScript = 'e:\Documents\Data Masker(SqlServer)\Masking Sets\redgatedemo.DMSMaskSet'

# Connect and create the new image
$NewImage = New-SqlCloneImage -Name $ImageName -SqlServerInstance $SqlServerInstance -DatabaseName DataMaskerDemo -Destination $ImageDestination -Modifications @(New-SqlCloneMask -Path $ImageScript) | Wait-SqlCloneOperation
$DevImage = Get-SqlCloneImage -Name $ImageName

# Create new clones for the devs
$Devs | ForEach-Object { # note - '{' needs to be on the same line as 'ForEach-Object'
    $DevImage | New-SqlClone -Name "DataMasker_Dev_$_" -Location $SqlServerDevInstance | Wait-SqlCloneOperation
}


Listing 1: Deploy sanitized database clones for development and testing work

This article will build on that work to extend the solution to include step 3, the injection of a curated set of data.
Creating the Curated Data Set

The curated data set must contain values to test all known behaviors of our software, including edge cases. It’s an important step that cannot be easily automated because each database contains its own set of unique transactional elements, each of which must be tested to ensure that the software handles them properly. However, once the data set is created, the ongoing maintenance to add data to represent new cases is minimal.

In the previous article, as part of the image creation process, we used Data Masker to generate a data masking set for the dbo.dm_Customers and dbo.dm_Employees tables, replacing any sensitive values with randomized data.

We implemented the masking by using the -Modifications parameter of the New-SqlCloneImage cmdlet to specify the use of a data masking set. We supply the masking file to the New-SqlCloneMask cmdlet, which takes it and produces a data modification script that can be run as part of the image creation process.

Listing 2 shows two relevant lines of code:

$ImageScript = 'e:\Documents\Data Masker(SqlServer)\Masking Sets\redgatedemo.DMSMaskSet'

# Connect and create the new image
$NewImage = New-SqlCloneImage -Name $ImageName -SqlServerInstance $SqlServerInstance -DatabaseName DataMaskerDemo -Destination $ImageDestination -Modifications @(New-SqlCloneMask -Path $ImageScript) | Wait-SqlCloneOperation

Listing 2: Using random data masking.

We will modify this section of the code to subsequently inject a curated data set into each of these tables, using two script files. While this could be done in a single script, abstracting the data sets into separate files is better for ongoing maintenance in a team environment. Each file would be created manually, perhaps using information from the production database, but with sensitive values changed in such a way that the data retains the correct characteristics.

Listing 3 shows a sample of the contents of the dm_customer_testdata.sql file, which adds the curated data to dbo.dm_Customers, with the customer names, addresses, credit card numbers, and so on set to realistic values that are not linked to any actual person.


/*
Test data set for dm_customer
Created: 2018-05-03
Last Modified: 2018-05-31
*/
INSERT dbo.DM_CUSTOMER
  (customer_id, customer_firstname, customer_lastname, customer_gender,
   customer_company_name, customer_street_address, customer_region, customer_country,
   customer_email, customer_telephone, customer_zipcode, credit_card_type_id, customer_credit_card_number)
VALUES
  ('9900000', 'Joe', 'Fanatic', 'M', '', '123 My St', 'Colorado', 'USA', 'jfanatic@gmail.redgate', '303-555-1212', '80237', '4', '1234 5678 9012 3456'),
  ('9900001', 'Sarah', 'Jones', 'F', 'Acme, Inc', '69958 Elm Ave', 'Virginia', 'USA', 'sarahj@acme.nowhere', '757-555-1234', '23455', '4', '12334 543 23546'),
  ('9900002', 'Amanda', 'Smith', 'F', '', '401 W 5th St #234', 'New York', 'USA', 'amanda455@gmail.nowhere', '212-555-4321', '20334', '3', '9876 4322 5747 5555'),
  ('9900003', 'Ian', 'Frank', 'M', 'British Stuff', '54 Old Oak Way', 'Cambridgeshire', 'UK', 'ifrank@bs.example', '+44-564-123123', 'CB4 0WZ', '2', '5454 3423 5445 4545')

Listing 3: The curated data set for customers

Listing 4 shows sample contents from the dm_employee_testdata.sql script file, which uses some famous names (from public data) plus some generated values to represent specific cases.


/*
Test data set for dm_employee
Created: 2018-05-03
Last Modified: 2018-05-31
*/
INSERT dbo.DM_EMPLOYEE
  (person_id, assignment_id, emp_id, first_name, last_name,
   full_name, birth_date, gender, title, emp_data)
VALUES
  (1018, 56, 'MAN1018B', 'Peyton', 'Manning', 'Peyton Manning', '1976-05-01', 'M', 'Mr.', NULL),
  (1007, 98, 'ELW1007D', 'John', 'Elway', 'John Elway', '1968-03-21', 'M', 'Mr.', NULL),
  (1012, 16, 'STA500V', 'Roger', 'Staubach', 'Roger Staubach', '1958-11-09', 'M', 'Mr.', NULL),
  (1014, 23, 'WIL855T', 'Serena', 'Williams', 'Serena Williams', '1984-02-22', 'F', 'Mrs.', NULL)

Listing 4: The curated data set for employees

In any live system, there would be additional rules to cover other entities, but for the sake of simplicity, this article will just include data for these two tables. The idea can be extended to more objects if necessary.
Injecting the Curated Data

Now that we have the two files containing curated data, let’s inject that data into the database. SQL Clone makes this easy because in addition to a data masking set created using Data Masker, we can specify a regular SQL file as well.

We use the New-SqlCloneSqlScript cmdlet to create an object that specifies each file, as shown in Listing 5. We use two variables, one for each script.

$EmployeeScript = New-SqlCloneSqlScript -Path 'C:\Users\way0u\Documents\SQL Server Management Studio\Dm_Employee_testdata.sql'

$CustomerScript = New-SqlCloneSqlScript -Path 'C:\Users\way0u\Documents\SQL Server Management Studio\Dm_customer_testdata.sql'

Listing 5: Adding the curated data scripts

Then, we just modify the New-SqlCloneImage line to add the curated data scripts. Place these objects after the data masker script, so that they execute last.

$Mask = New-SqlCloneMask -Path $ImageScript
$ImageOperation = New-SqlCloneImage -Name $ImageName -SqlServerInstance $SqlServerInstance -DatabaseName 'DataMaskerDemo' -Destination $ImageDestination -Modifications @($Mask, $EmployeeScript, $CustomerScript)

Listing 6: Using SQL Clone to use both randomized and curated data sets during image creation

When we run this as part of our automation, we will see both the masking set and the SQL scripts execute as part of the image-creation process. When we query any clones created from this image, we will see our curated data set included alongside the other, masked data.
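The reason the curated-data scripts are placed after the masking set in the modifications array can be illustrated with a small sketch. This is not the SQL Clone API, just a Python model of an ordered modification pipeline, with hypothetical step names:

```python
# Illustrative model: image modifications run in order, which is why the
# curated-data scripts must come after the masking step.

def mask_step(rows):
    """Masking replaces every last name it finds, curated or not."""
    return [{**r, "last_name": "XXXX"} for r in rows]

def curated_step(rows):
    """Curated injection adds a known row the team recognizes."""
    return rows + [{"customer_id": 9900000, "last_name": "Fanatic"}]

def build_image(source_rows, modifications):
    """Apply each modification in order, as happens during image creation."""
    rows = list(source_rows)
    for step in modifications:
        rows = step(rows)
    return rows

source = [{"customer_id": 1, "last_name": "RealPerson"}]

# Correct order: mask first, inject curated data last -- the known row survives.
good = build_image(source, [mask_step, curated_step])
assert good[-1]["last_name"] == "Fanatic"

# Wrong order: the curated row is itself masked away.
bad = build_image(source, [curated_step, mask_step])
assert all(r["last_name"] == "XXXX" for r in bad)
```

In other words, if the curated inserts ran before the masking set, the masking rules would scramble the very values the team relies on recognizing.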

In the real world, where applications are running, however, we often find that the users will stress our system with data that we haven’t used in our test and development environments. This forces teams to set aside “unplanned” maintenance time to fix bugs and performance issues, post-deployment.

Ingrained in the DevOps culture is the idea that to minimize this unplanned work, we need to ensure that our software behaves correctly (even when handling edge cases), performs adequately, and integrates smoothly as part of routine daily development. This means being able to build databases and test them under conditions that mimic the live environment as closely as possible.

This article presents a method that allows us to use SQL Provision to build consistent, compliant, and useful databases on demand for development and test environments. By combining the power of Data Masker for SQL Server to randomize any sensitive data along with custom SQL scripts to represent a curated data set, we ensure that the resulting databases can be used for effective and reliable testing and are easy for humans to use when examining the data.