As president and lead security specialist of a company that performs security assessments for a wide variety of organizations, I often see the same issues over and over. So I’d like to share with the Ars audience the top 10 most common security problems I see on a daily basis, along with some tips on how to address them.
1) Monolithic Network
A monolithic network is one where all devices are on a single physical and logical network. Most small- and medium-sized businesses are set up in this way, usually for the sake of making things easy. The problem with this arrangement is that any one vulnerability exposes the whole network to further compromise. When combined with the common (but mistaken) stance that internal networks are fundamentally distinct from external networks, a monolithic network can be particularly risky.
In today’s security world, there is no longer a clear distinction between internal and external networks. The more modern client-side and web-based attacks make even the best perimeter defenses useless. As such, it’s important to act as if every piece of your network is as vulnerable to outside attacks as the machines currently sitting outside your firewall.
So how does one go about addressing this problem? Well, 99 percent of the solution lies in planning and research. First, start with a firewall. A well-segmented network should not have any two segments touch through anything but the firewall, which means that each segment needs a separate interface on the firewall. Many smaller IT shops know firewalls only as 2- or 3-sided devices that separate public, private, and maybe DMZ (demilitarized zone, a protected place for public servers) networks. But what you really need is something with as many interfaces as you can afford. Many commercially supported firewalls come in relatively cheap 8-port configurations, but you can always load up an old PC with as many Ethernet cards as it can hold (quad-port cards help in these cases) and install a Linux- or BSD-based firewall. Whichever option you choose, be sure that the hardware is reliable, because it will be a single point of failure for your network.
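The default-deny principle behind this kind of segmented design can be sketched as a simple rule-matching function. The segment names and port numbers below are hypothetical, just to show the shape of the policy:

```python
# Hypothetical sketch of default-deny, segment-to-segment firewall rules.
# Any cross-segment traffic not explicitly allowed is dropped.

# Each rule: (source segment, destination segment, destination TCP port)
ALLOW_RULES = {
    ("clients", "servers", 443),    # web traffic to internal apps
    ("clients", "servers", 445),    # file sharing
    ("admin",   "servers", 22),     # SSH from the admin segment only
    ("admin",   "printers", 443),   # printer admin interfaces
}

def is_allowed(src_segment, dst_segment, dst_port):
    """Default deny: permit only explicitly listed cross-segment flows."""
    if src_segment == dst_segment:
        return True  # same-segment traffic never crosses the firewall
    return (src_segment, dst_segment, dst_port) in ALLOW_RULES
```

The point of writing it this way is that the allow list is short and explicit; anything you forgot to list simply doesn't flow, which is exactly what forces you to learn what your network actually does.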
Once you have the hardware lined up, the first part of the planning process is to draw lines in the sand. This will differ for each organization, but make a list of every device on the network, and categorize them into departments, functions, and security needs. Segmentation should happen along these lines, separating groups from each other. Almost always, you’ll want to have an administrative network segment, which is where all the administrative interfaces for servers, routers, firewalls, printers, and the like should be pointed.
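The inventory-and-categorize step can be as simple as a spreadsheet, but a small script makes the mapping from categories to segments explicit. The device names and categories here are made up for illustration:

```python
# Hypothetical device inventory: categorize every device by function,
# then derive the list of network segments from the categories.
from collections import defaultdict

INVENTORY = [
    ("fileserver01", "servers"),
    ("payroll-pc",   "finance"),
    ("printer-3f",   "printers"),
    ("core-switch",  "admin"),     # admin interfaces get their own segment
    ("reception-pc", "clients"),
]

def plan_segments(inventory):
    """Group devices by category; each group becomes a firewall segment."""
    segments = defaultdict(list)
    for device, category in inventory:
        segments[category].append(device)
    return dict(segments)
```

Each key in the result is a candidate segment that will need its own firewall interface, which is also a quick sanity check on whether your hardware has enough ports.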
The second part of planning is research and testing. Before you make any real changes, you need to know exactly what devices communicate with each other and over what ports, and what sort of traffic will be moving through them. This step, while enormously time-consuming and sometimes frustrating, will give you insight into your network that you’ve never had before. By forcing all traffic to flow through a firewall (and thereby having to create rules to allow it to do so), you gain an incredible amount of knowledge about how your network actually works. An added bonus is that you’ll likely find things running that shouldn’t be, or you’ll uncover things that have been long forgotten.
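A minimal sketch of that research step: summarize observed flows so you know who talks to whom, and on which ports, before writing any rules. In practice the records would come from something like tcpdump or NetFlow output; the hardcoded list here is a stand-in:

```python
# Hypothetical sketch: tally observed (source, destination, port) flows
# from a capture, so firewall rules are based on real traffic.
from collections import Counter

# In practice, parse these tuples from tcpdump/NetFlow output.
FLOWS = [
    ("pc-12", "fileserver01", 445),
    ("pc-12", "fileserver01", 445),
    ("pc-07", "old-legacy-box", 23),   # a telnet session to a forgotten machine
    ("printer-3f", "update-host", 80),
]

def summarize_flows(flows):
    """Count how often each (src, dst, port) flow was seen."""
    return Counter(flows)
```

The rarely-seen entries in the tally (like that stray telnet session) are exactly the long-forgotten things this exercise tends to uncover.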
Once it is all said and done, you should have a network that is bundled into nice, neat packages. You should now have hugely improved visibility, along with the tools to control traffic like you’ve never had before.
So, how does this make you more secure? Well, if your firewall has an integrated IDS/IPS (intrusion detection/prevention system), almost any attack traffic will be forced to pass over it, which will give it a much higher chance of being detected. Should something actually get past the firewall, the compromise will be contained within the network segment where it occurred, which makes it much easier to identify and clean up.
2) Inconsistent Patching
Despite all the awareness around patching that popular tools like Microsoft’s Windows Update have created in the last several years, it seems that some people still don’t get it. While I do generally find that the majority of Windows PCs are up-to-date on patches, most other systems are often left behind. This generally affects legacy systems (that old PC in the corner running some ancient piece of software that people still use once a month or so), but the problem is also rampant in infrastructure devices.
The common thinking (especially concerning infrastructure and extraneous devices like printers) is that patching is either unnecessary, or that the device is too critical to risk bringing down in order to do regular patching. Often, people take the attitude of, "I’ll patch it if it fixes a problem that immediately affects functionality," which is unfortunate.
There are also common misconceptions about the power available to an attacker in devices like printers and other embedded devices. Most people think that printers and the like are relatively simple machines that don’t have anywhere close to the capabilities of a normal desktop PC. In reality, many of the newer printers run full operating systems (often Linux or VxWorks), which have vulnerabilities just like Windows or any other desktop OS. What’s more, these devices are often in the best position from an attacker’s point of view, because they don’t get patched unless they’re broken, and they don’t get monitored for malicious traffic in the same way that client and server PCs do.
The real issue here is to be consistent. Include all devices in your patch management cycle (however limited it may be). Printers, routers, switches, wireless APs, and legacy machines all have vulnerabilities, just like clients and servers. In most cases, the damage that can be done by leveraging smaller devices in a network is equal to or greater than what could be done if a PC were compromised. Unfortunately, the options for securing an embedded device are more limited than those available for a PC, so keep them patched to avoid trouble.
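One way to keep that cycle consistent is to track a last-patched date for every device, embedded or not, and flag anything past a chosen age. This is a hypothetical sketch; the device names, dates, and 90-day threshold are all assumptions:

```python
# Hypothetical sketch: flag any device whose last patch date is older than
# a chosen threshold, so printers and legacy boxes aren't quietly skipped.
from datetime import date, timedelta

LAST_PATCHED = {
    "win-client-01": date(2024, 5, 1),
    "printer-3f":    date(2022, 1, 15),   # embedded devices lag badly
    "core-switch":   date(2023, 8, 9),
}

def stale_devices(last_patched, today, max_age_days=90):
    """Return devices not patched within max_age_days of today, sorted."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(d for d, patched in last_patched.items() if patched < cutoff)
```

The threshold will differ per organization; the point is that every device, including the printer, appears on the same list and ages out the same way.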
3) Cleartext Protocols
One of the protocols I like to use as an example when discussing man-in-the-middle (MITM) attacks is Microsoft’s Remote Desktop Protocol (RDP). For those unfamiliar with it, RDP is a client and server application and protocol that’s used to remotely control the desktop of a Windows machine. Quite often, RDP is used by IT departments to administer servers, and to provide remote help to employees. The problem with RDP is that, by default, it sends data in the clear. As such, anyone sitting on the local network is able to sniff traffic and recover the credentials used to make the RDP connection. In bigger shops, these credentials are often tied to directory systems, which means that a set of valid administrator credentials will work over the whole network of machines.
To an attacker, this is gold. All an attacker has to do after compromising one machine is to wait for an administrator to do some maintenance work (a creative attacker would probably create a problem for the administrator to go check on) and harvest the credentials.
Microsoft has a page describing how to wrap RDP in an encryption layer like SSL or TLS, but that only prevents a MITM attack if the end user actually heeds warnings about bad certificates when they come up. In the long run, I usually suggest that people look for an alternative to RDP that best fits their needs and doesn’t have the inherent security issues. Most of all, disable RDP if it isn’t being used, and if it does absolutely need to be used, wrap it in encryption, and keep in mind that someone may be watching.
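Finding where RDP is still enabled is the first step. Given open-port data from an internal scan (the hosts and port sets below are hypothetical), listing the exposed machines is trivial:

```python
# Hypothetical sketch: given open-port results from an internal scan,
# list the hosts still exposing RDP (TCP 3389) so they can be disabled
# or wrapped in TLS.
OPEN_PORTS = {
    "fileserver01": {445, 3389},
    "hr-pc":        {3389},
    "printer-3f":   {80, 9100},
}

RDP_PORT = 3389  # RDP's default listening port

def rdp_exposed(open_ports):
    """Return hosts listening on the default RDP port, sorted."""
    return sorted(h for h, ports in open_ports.items() if RDP_PORT in ports)
```

Every host this turns up is a potential credential-harvesting point for an attacker who has already compromised one machine on the same segment.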