no-nonsense, objective, experienced, honest

The Blog of an honest Consultant.


DNS Unleashed

The Domain Name System (DNS) is an important core network service, because it serves as the network's central phonebook. Its structure is simple and scalable. A central configuration file (in BIND, named.conf) defines which data a DNS server is responsible for and how it should handle requests. Most implementations support both authoritative and recursive (non-authoritative) operation, although running both roles on a single system is not recommended. On an authoritative DNS server, named.conf references so-called "zone files", which contain the actual resource records for each zone.

When a DNS server starts, the entire configuration including the zone data is loaded into memory. Back in 1987, this concept was described in Requests for Comments (RFC) 1034 and 1035, and it has since been extended with features such as the DNS Security Extensions (DNSSEC, RFCs 4033-4035) or DNS-based Authentication of Named Entities (DANE, RFC 6698). Given this history, there is no doubt that DNS is an established network service, yet it is often neglected.

DNS is central

Once you start analysing DNS data, it becomes interesting to notice all the vectors that utilize DNS. Of course, it starts with the usual clients on the network, laptops and PCs, but an application server that requires access to its database also uses DNS to resolve the IP address of that system. Furthermore, every cloud-based service hides its physical addresses behind Fully Qualified Domain Names (FQDNs). As more applications move "into the cloud", DNS is becoming increasingly important. But even within enterprise networks, the tremendous growth of IP-enabled devices requires DNS. These include Smart Objects belonging to the building infrastructure, such as temperature sensors in workrooms, lighting controls, or even locking systems. Let's not forget IPv6-enabled devices, whose numbers have been steadily increasing.

Ultimately, everything connected to the corporate network depends on DNS. This very fact makes DNS a core service that can be leveraged for security. Three possible approaches are described in this post.

Prevent Access to Malicious Sources

A typical attack on a corporate network affects internal clients. Such a compromise, however, only becomes critical if the infected system attempts to load additional software. Since the physical addresses of the target systems change constantly, attackers use DNS to locate the current IP addresses quickly.

Devices such as notebooks or PCs can be protected with antivirus software and similar products, but the landscape has changed. As already mentioned, more and more systems come online on which no additional software can be installed. Apart from Smart Objects, this includes network cameras, POS systems, and Voice-over-IP phones. For such platforms, DNS can form the first layer of IT security.
Many DNS implementations support Response Policy Zones (RPZ), with which predefined queries can be blocked or redirected. This is often used to redirect mistyped website names to the intended website. RPZ can also be extended with an automatically updated list of undesired destinations. Multiple providers offer dynamic, up-to-the-minute security feeds covering malware sites, spam domains, and botnet infrastructure. This way, a DNS server can suppress name resolution for such content.
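The filtering logic behind RPZ can be sketched as follows. This is a minimal illustration of the idea, not a real RPZ implementation; the feed entries and the walled-garden address are invented for the example.

```python
# Illustrative sketch of RPZ-style filtering: before resolving a name,
# the server checks it (and its parent domains) against a security feed.
# Feed contents and redirect target are made-up examples.

BLOCK_FEED = {"malware.example", "botnet-c2.example"}  # hypothetical feed
WALLED_GARDEN = "192.0.2.1"  # documentation address used as redirect target

def rpz_action(qname: str) -> str:
    """Return the resolver's decision for a queried name.

    A name is diverted if it, or any parent domain, appears on the feed,
    mirroring RPZ wildcard behaviour.
    """
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCK_FEED:
            return WALLED_GARDEN      # divert to a walled garden
    return "resolve"                  # pass through to normal resolution

print(rpz_action("www.malware.example"))  # -> 192.0.2.1 (diverted)
print(rpz_action("www.example.org"))      # -> resolve
```

In a real deployment the feed would be transferred as a regular zone and refreshed automatically; the lookup against parent domains is what lets a single feed entry cover every host beneath it.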

DNS and Stochastics

The combination of security feeds with the RPZ function has one major drawback: the unwanted, malicious content must already be known in order to be blocked. Anything not on the list is treated as trustworthy by default and gets resolved by DNS. Attackers exploit this by generating DNS names at random and registering new domains periodically.

Probability theory can help differentiate between good and bad names. Looking at the three DNS names in the picture below, it quickly becomes clear that the first request is unusual: a combination of letters like "zwyv" is almost certainly a randomly generated name. The "mx" of the second query could look suspicious as well, but followed by "server" it becomes a common abbreviation for mail servers. At first glance the third request, with its "ee" and "aa", also looks randomly generated. However, it is the word for "dog" in the language of the Navajo. It is unusual to find this term in a domain registered in Germany, which shows that we should not only check the name of a resource, but also its context.

The gut instinct involved in categorizing DNS names can be explained mathematically. For the letters and letter combinations of every language there is a probability distribution, which can be broken down into "n-grams". In a typical German text the unigram "e" is the most likely, appearing with a probability of about 16 percent, while a "j" appears with a probability of only 0.3 percent. At 4 percent, the bigram "er" is far more prominent than the bigram "it" at 0.8 percent.
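Estimating such a distribution is straightforward. The sketch below computes relative n-gram frequencies from a tiny invented German sentence; a real model would of course be trained on millions of characters.

```python
# Estimate relative n-gram frequencies from a text corpus.
# The corpus below is a toy example; real distributions are built
# from large text collections per language.
from collections import Counter

def ngram_frequencies(text: str, n: int) -> dict:
    """Relative frequencies of n-grams (letters only, lowercased)."""
    letters = "".join(c for c in text.lower() if c.isalpha())
    grams = [letters[i:i + n] for i in range(len(letters) - n + 1)]
    total = len(grams)
    return {g: count / total for g, count in Counter(grams).items()}

corpus = "der schnelle server antwortet auf jede anfrage"
unigrams = ngram_frequencies(corpus, 1)
bigrams = ngram_frequencies(corpus, 2)

# Even in this tiny sample, "e" is the most frequent unigram,
# matching its dominance in real German text.
print(max(unigrams, key=unigrams.get))  # -> e
```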

DNS requests can easily be analyzed in this way. A DNS server in Germany could load the n-gram probability distributions for German and English and test every name resolution against them. A self-learning system in the background would correct the n-gram distribution so that false positives are avoided, because companies often choose cryptic, non-descriptive names in DNS. But even these naming conventions are statistically predictable.
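Scoring a queried name against such a distribution could look like the following sketch. The bigram probabilities here are invented stand-ins for a trained language model; the point is only that natural names score much higher than random ones.

```python
# Score DNS labels by average bigram log-probability: names drawn from
# natural language score high, randomly generated names score very low.
# The probability table is a made-up stand-in for a trained model.
import math

BIGRAM_PROB = {"er": 0.04, "se": 0.02, "rv": 0.01, "ve": 0.02, "ma": 0.02,
               "ai": 0.01, "il": 0.02, "ls": 0.01, "le": 0.02}
FLOOR = 1e-4   # probability assigned to bigrams never seen in the corpus

def ngram_score(label: str) -> float:
    """Average bigram log-probability; very negative means 'looks random'."""
    bigrams = [label[i:i + 2] for i in range(len(label) - 1)]
    return sum(math.log(BIGRAM_PROB.get(b, FLOOR)) for b in bigrams) / len(bigrams)

print(ngram_score("mailserver") > ngram_score("zwyvqxkj"))  # -> True
```

A production system would compare the score against per-language thresholds and keep updating the model with the locally observed, legitimate query names to reduce false positives.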

Big Brother and DNS

By observing the DNS requests from a client, its operating system as well as its installed applications can be recognized easily. Apple devices regularly try to reach the App Store, and applications with a cloud back end such as Evernote or Dropbox are quickly identified.

If this behavior is documented long term, DNS allows anomalies to be detected on the client side. Assume a PC usually boots every morning between 8 AM and 9 AM; this is reflected in the corresponding DNS requests for the Exchange server and perhaps various social networks. If the same DNS queries occur at 3 AM, that is not automatically an attack, because only trusted content is being accessed. Nevertheless, it is an anomaly that should be investigated.
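A minimal sketch of such a time-of-day baseline, with an invented client name and activity window, assuming the usual boot window has already been learned from historical queries:

```python
# Flag DNS activity that falls outside a client's learned activity window.
# Client name and baseline hours are made-up example data.
from datetime import datetime

# Learned baseline: hours in which this client's startup queries
# normally occur (boots between 8 and 9 in the morning).
BASELINE_HOURS = {"pc-042": range(8, 10)}

def is_anomalous(client: str, ts: datetime) -> bool:
    """True if a query burst occurs outside the client's usual hours."""
    return ts.hour not in BASELINE_HOURS.get(client, range(24))

print(is_anomalous("pc-042", datetime(2015, 6, 1, 8, 30)))  # -> False
print(is_anomalous("pc-042", datetime(2015, 6, 1, 3, 0)))   # -> True
```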

Another example involves infected systems that frequently send a DNS request for a known and trusted destination. Basically, there is no cause for concern if a POS system occasionally sends such a request but connects to its controller most of the time. However, if this request occurs every fifteen minutes throughout the entire day, it is an indication of third-party software checking for an Internet connection in order to download further malicious software.
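This kind of beaconing stands out through its regularity. The sketch below checks whether the gaps between queries for one name are nearly constant; the timestamp series are invented example data.

```python
# Detect beaconing: query timestamps (in minutes) for a single name are
# suspicious when the gaps between them are nearly constant.
# The timestamp lists are made-up example data.

def looks_periodic(timestamps, tolerance=1.0):
    """True if consecutive query gaps are almost constant (beaconing)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False        # too few samples to judge
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= tolerance for g in gaps)

beacon = [0, 15, 30, 45, 60, 75]    # every 15 minutes, around the clock
human  = [0, 2, 47, 51, 130, 131]   # irregular, user-driven lookups
print(looks_periodic(beacon))  # -> True
print(looks_periodic(human))   # -> False
```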

Such analysis cannot take place directly on a DNS server, because in addition to the request and its context, time also plays an important role. Anonymized forms of this information must therefore be sent to a powerful database for evaluation.
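One simple way to anonymize such records before export is to replace the client identifier with a salted hash, so the analysis database can still correlate queries per client without learning who the client is. The record fields and the salt below are invented for the example.

```python
# Anonymize DNS query records before shipping them to an external
# analysis database. Field names and the salt are made-up examples.
import hashlib

SALT = b"site-local-secret"   # kept on premises, never exported

def anonymize(record: dict) -> dict:
    """Replace the client identifier with a salted, truncated hash."""
    digest = hashlib.sha256(SALT + record["client"].encode()).hexdigest()[:16]
    return {"client": digest, "qname": record["qname"], "ts": record["ts"]}

rec = {"client": "10.1.2.3", "qname": "www.example.org", "ts": 1433142000}
anon = anonymize(rec)
print(anon["client"] != rec["client"])  # -> True
```

The same client always maps to the same pseudonym, which is exactly what the long-term behavioral profiling described above needs, while the salt prevents the database operator from reversing the mapping by brute force over the address space alone.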


To date, the Domain Name System is unfortunately greatly underestimated in enterprise networks, even though its central position offers many ways to improve the safety of an infrastructure. Security feeds can block known malicious content, a DNS server itself can analyze the n-grams of queries, and on the client side a powerful, potentially cloud-based database can detect anomalies.

Admin - 12:27 @ general, dns, security