Channel: Networking Guides – pfSense Setup HQ

Asymmetric Encryption


The biggest disadvantage of symmetric encryption algorithms relates to key management. To ensure confidentiality of communication between two parties, each communicating pair needs a unique secret key. As the number of communicating pairs increases, the number of keys to be managed grows with the square of the number of communicators, which quickly becomes a complex problem.

Introducing Asymmetric Encryption

Asymmetric encryption algorithms were developed to overcome this limitation. Also known as public-key cryptography, these algorithms use two different keys to encrypt and decrypt information. If cleartext is encrypted with an entity’s public key, it can only be decrypted with the corresponding private key. The basic principle is that the public key can be freely distributed, while the private key must be held in strict confidence. The owner of the private key can encrypt cleartext to create ciphertext that can only be decoded with its public key, thus assuring the identity of the source, or it can use the private key to decrypt ciphertext encoded with its public key, assuring the confidentiality of the data. Although these keys are generated together and are mathematically related, the private key cannot be derived from the public key.


Instead of relying on the techniques of substitution and transposition that symmetric key cryptography uses, asymmetric encryption algorithms rely on large-integer mathematics problems. Many of these problems are simple to do in one direction but difficult to do in the opposite direction. For example, it is easy to multiply two numbers together, but it is much more difficult to factor the product back into the original numbers, especially if the integers used contain hundreds of digits. Thus, in general, the security of asymmetric encryption algorithms depends not upon the feasibility of brute-force attacks, but upon the feasibility of performing the difficult inverse mathematical operations, and on advances in mathematical theory that may yield new “shortcut” techniques.
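The public/private key relationship described above can be demonstrated with the openssl command-line tool. This is only a sketch; the file names and message are arbitrary examples:

```shell
# Generate an RSA key pair (file names here are arbitrary examples)
openssl genrsa -out priv.key 2048
openssl rsa -in priv.key -pubout -out pub.key

# Anyone holding the freely distributed public key can encrypt...
echo "attack at dawn" > msg.txt
openssl pkeyutl -encrypt -pubin -inkey pub.key -in msg.txt -out msg.enc

# ...but only the private key holder can recover the cleartext
openssl pkeyutl -decrypt -inkey priv.key -in msg.enc
```

Note that msg.enc is unreadable gibberish to anyone who holds only pub.key, which is exactly the property that lets the public key be distributed freely.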

Asymmetric encryption is much slower than symmetric encryption, for several reasons. First, it relies on exponentiation with both a secret and a public exponent, as well as generation of a modulus; computationally, exponentiation is a processor-intensive operation. Second, the keys used by asymmetric encryption algorithms are generally larger than those used by symmetric algorithms, because the most common asymmetric attack, factoring, is more efficient than the most common symmetric attack, brute force.

Because of this, asymmetric encryption algorithms are typically used only for encrypting small amounts of information. In subsequent articles, we will examine different asymmetric algorithms, such as Diffie-Hellman, RSA, and El Gamal.

External Links:

Public-key cryptography at Wikipedia

The post Asymmetric Encryption appeared first on pfSense Setup HQ.


netfilter Operation: Part Twelve (Firewall Builder continued)


Firewall Builder on startup.

NOTE: After I posted this article, I found out it’s possible to add objects/networks/hosts/etc. by right-clicking items on the object tree under the Linux version of Firewall Builder. This article has been amended accordingly.

In the previous article, I introduced Firewall Builder, including some notes on installation under Windows and Linux. In this article, I will step through the process of adding a firewall object and configuring it.

Firewall Builder: Creating a Firewall Object

In this example, I installed Firewall Builder under Linux Mint. Initially, there are three main options in the main dialog area: “Create New Firewall“, “Import Existing Configuration“, and “Watch ‘Getting Started’ Tutorial“. Click on “Create New Firewall“, which will open the New Firewall dialog box.


The New Firewall dialog box.

In the New Firewall dialog box, enter the name for the new firewall (in this case OFFICE01). For the firewall software, select iptables from the dropdown box. For the OS, choose Linux 2.4/2.6 and click Next. The next window allows you to configure the interfaces on the firewall. You can do it manually, or if the firewall is running SNMP, you can discover them via SNMP. Here, we select Configure interfaces manually and click Next. This will bring up the manual configuration window.

Enter the relevant information for each network interface. The name must correspond to the actual interface name (the same name you would see if you ran ifconfig on the Linux host), such as eth0. The Label is a human-friendly name for easy reference, such as OUTSIDE. When you are done entering the information for a given interface, click Add. When you have entered the information for all interfaces (typically at least an INSIDE and an OUTSIDE), click Finish.

You must designate one of the interfaces on the firewall as the management interface, typically the INSIDE interface. Do this by navigating to the firewall in the object tree. As you select each interface in the object tree, a “Management interface” checkbox appears in the dialog area. Check this box for the interface you want to use. This is the interface that Firewall Builder will connect to when uploading the firewall rules.

Firewall Builder: Adding a Network


The button for adding new networks/hosts/services/etc is in the upper left, adjacent to the back arrow button.

Now that you have the basic firewall defined, you need to define something for it to talk to. In this case, we will assume that 192.168.1.0/24 is your internal network, and you want to allow outbound Web browsing and access to an internal Web server (WEB1). For starters, you need to create an object to represent the internal network. Follow these steps to create the network object:

  1. Navigate to Objects -> Networks in the object tree (in order to make the object tree visible, you may have to go to the View menu and unselect Editor Panel).
  2. Right-click Networks and select New Network.
  3. Enter INTERNAL for the name of the network, and use 192.168.1.0 for the Address field. Enter 255.255.255.0 for the Netmask.
  4. Next, we’ll create an internal Web server at 192.168.1.2.  Right-click Objects -> Hosts in the object tree and select New Host.
  5. Enter WEB1 for the name of the object. Click the Use preconfigured template host objects check box and click Next.
  6. Select PC with one interface and click Finish.
  7. Expand the object tree to User -> Objects -> Hosts -> WEB1 -> eth0 -> WEB1. Edit the IP address to be 192.168.1.2 and click Apply.
  8. Next, define the appropriate services to allow Web-browsing. Navigate in the object tree to Services -> TCP, right-click on it, and select New Service.
  9. Enter HTTP for the name. Leave the source port ranges at zero, but change the destination port range to start and end at 80.
  10. Repeat the previous two steps for HTTPS on port 443 for secure Web pages.

Now that we have created the network object, in the next article, we will cover defining the firewall rules to allow inbound web traffic and uploading the rules to the firewall.


External Links:

The official Firewall Builder web site

Using Firewall Builder on Linux to Create Firewalls from Scratch on linux.com

Firewall Builder Tutorial: The Basics on YouTube

The post netfilter Operation: Part Twelve (Firewall Builder continued) appeared first on pfSense Setup HQ.

netfilter Operation: Part Thirteen (Firewall Builder, continued)


Adding inbound and outbound rules for the web server in Firewall Builder.

In the last article, we discussed the process of setting up a firewall object in Firewall Builder and adding a network to it, as well as adding a web server to the network. This seems like a lot of additional effort; however, the real advantage of an object-oriented approach is seen when it comes time to configure the rules. With all of the appropriate objects in place, let’s define the rules to permit the inbound HTTP traffic.

  1. Create a new rule by either navigating to Rules -> Insert New Rule from the menu at the top of the window, or click on the large plus (+) beneath the top menu.
  2. Allow inbound HTTP to WEB1. Click on WEB1 in the object tree and drag it to the destination cell for rule 0.
  3. Now drag the HTTP and HTTPS service from the object pane to the Service cell in rule 0.
  4. Right-click the big red dot in the Action column and select Accept. This allows the inbound Web traffic to access WEB1.
  5. To allow outbound Internet access, create another rule by either navigating to Rules -> Insert New Rule or by clicking on the big plus (+) beneath the menu.
  6. Drag and drop HTTP and HTTPS from the object tree into the Service column of rule one.
  7. Drag the Network object INTERNAL from the object tree to the Source column of the new rule.
  8. Right-click on the Action column for rule 1 and change the action to ACCEPT.
  9. Although our rules seem simple at the moment, let’s apply them to see how things work. First, save your work by navigating to File -> Save or File -> Save As.
  10. Next, right-click the OFFICE01 Firewall and select Compile.
  11. When the “Select Firewalls for compilation” window comes up, OFFICE01 should be checked. When satisfied with your selection, click Next. When the compilation is complete you should see “Success” in the “Progress” column. After verifying that the compilation was successful, click Finish.

Compiling and Uploading the Firewall Rules


Compiling the firewall rules.

The next step is to tell Firewall Builder where to find the SSH executables, because this is how Firewall Builder uploads the configuration to the firewalls. You need to have SSH working on both the firewall and the Firewall Builder console, assuming they are on different systems.

  1. Select Edit -> Preferences from the menu.
  2. Select the Installer tab and click the Browse button.
  3. Navigate to the location of your desired SSH utility and click Open. Note that if you are using Windows for the Firewall Builder host, you cannot select PUTTY.EXE; you must use the command-line PuTTY program PLINK.EXE. In Linux, you can leave the default setting (ssh).
  4. After selecting the SSH executable, click OK.
  5. Right-click the OFFICE01 firewall in the object tree, and select Install.
  6. Select the firewalls you wish to install, and click Next.
  7. Enter the username and password for the SSH connection.
  8. All other fields are optional; however, it is recommended that you check “Store a copy of the fwb on the firewall.” When satisfied with your choices, click Ok.

After the upload completes, you will get a status of “Success”. Checking your firewall (iptables -L) will show the new rules.
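The generated policy is roughly equivalent to iptables commands like the following. This is a hedged sketch using the addresses from the earlier examples; Firewall Builder’s actual output uses its own chain names and adds logging targets, so it will not match line for line:

```shell
# Rule 0: inbound HTTP/HTTPS to WEB1 (192.168.1.2)
iptables -A FORWARD -p tcp -d 192.168.1.2 --dport 80 -j ACCEPT
iptables -A FORWARD -p tcp -d 192.168.1.2 --dport 443 -j ACCEPT
# Rule 1: outbound web browsing from the INTERNAL network
iptables -A FORWARD -p tcp -s 192.168.1.0/24 --dport 80 -j ACCEPT
iptables -A FORWARD -p tcp -s 192.168.1.0/24 --dport 443 -j ACCEPT
```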



One point that should be made is that you have to be careful when configuring the rules. It is always a good idea to create the rules that permit administrative access before any others. This is because as soon as you configure the default policies to DROP, your SSH connection will no longer be permitted unless it has been added to the access list. If you forget to do this, you could find that you no longer have remote access to your firewall after applying the policy; if that happens, you won’t even be able to connect remotely to update the policy and change the ACLs.

External Links:

The official Firewall Builder site

The post netfilter Operation: Part Thirteen (Firewall Builder, continued) appeared first on pfSense Setup HQ.

netfilter Operation: Part Fourteen (Firewall Builder, conclusion)


Adding inbound and outbound NAT rules in Firewall Builder.

As you can probably see, once you have completed the up-front work of defining your objects, adding or modifying rules is simple. Additionally, unlike the other free GUI solutions, Firewall Builder allows you to centrally and securely administer all of your (supported) firewalls from one location.

Notice that the default chains have rules matching the rule you configured in Firewall Builder, with a target of RULE_<RULE_NUMBER>. These additional chains are used to configure the logging. There is also a rule at the beginning of all chains to ACCEPT traffic related to an established session. This is generally desirable but is still configurable. To remove this automatically generated rule, select the firewall in the object tree and click on Firewall Settings in the dialog area. There is a checkbox, selected by default, called “Accept ESTABLISHED and RELATED packets before the first rule.”

Although the Firewall Builder policies you’ve configured can handle any basic rules you might need, there are still a few more issues to cover. If you need to NAT with your Linux firewall, configuring it with Firewall Builder is easy. Follow these steps so that your firewall will NAT all the traffic from the internal network to the DHCP address used on the outside interface. This configuration is also known as source NAT, because it is the source address that is being changed.

  1. In the Object Tree, select NAT.
  2. Move your mouse to the pane to the right of the Object Tree, right-click and select Insert Rule.
  3. Drag your INTERNAL network object from the object tree to the Original Src column in the new NAT policy.
  4. Drag the external interface on the firewall from the object tree to the “Translated Source” column in the NAT policy.


Now, save, compile, and install the new policy. Traffic originating from the internal network will now be NAT-ed to the IP on the external interface. Although this source NAT configuration will allow all your internal users to reach the Internet, you will need to use destination NAT if Internet users need to reach an internal server. Because the internal server is using a private IP address (which is not routable on the Internet), you need to translate this destination to an IP address that the external users can reach. To redirect packets destined for the firewall’s single public IP address to an inside resource using destination NAT, follow these steps:

  1. In the Object Tree, select NAT.
  2. Right-click on rule number zero of the existing NAT rule and select Add rule at Bottom.
  3. Drag the firewall OUTSIDE interface into the Original Destination column of the new rule.
  4. Drag the appropriate services (HTTP and HTTPS) into the Original Service column of the new rule.
  5. Drag the internal server into the Translated Destination column of the new rule.
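Taken together, the two NAT policies above correspond approximately to the following iptables rules. This is an illustrative sketch rather than Firewall Builder’s literal output; eth0 stands in for the OUTSIDE interface, and MASQUERADE is used because the outside address is assigned by DHCP:

```shell
# Source NAT: rewrite internal source addresses to the outside interface address
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
# Destination NAT: forward inbound web traffic to the internal server WEB1
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.2
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.2
```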

Firewall Builder: Creating a Time Policy


Creating a time policy with Firewall Builder.

Another nice feature is being able to create a time policy. In this example, we’ll alter the rules so the internal systems can only surf the web from noon to 1:00 PM:

  1. In the Object Tree, right-click Time, and select New Time Interval.
  2. In the “Name” field, we’ll call this rule LUNCH.
  3. In the two time fields provided, enter a time for the rule to START and a time for the rule to STOP. In this case we will enter 12:00 and 13:00 and leave the date fields as zeros. You can check off every day of the week below the time fields, so the time interval applies to all days. When done, click Apply.
  4. Drag the LUNCH time interval from the Object Tree to the Time column of rule #1.

Now, rule #1 (which permits outbound web surfing) will only be active from noon to 1:00 PM. The ability to configure the rules to be active based on the time of day is a very powerful feature. If the organization is a strictly 8 AM to 5 PM type of place, you could configure the firewall to disable all access during non-business hours. Alternatively, certain non-business-related protocols could be enabled after the normal business day ends.
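For reference, netfilter itself expresses such schedules through the “time” match module. A hedged sketch of what rule #1 with the LUNCH interval might compile to (again, not Firewall Builder’s literal output):

```shell
# Permit outbound web traffic only between 12:00 and 13:00, every day
iptables -A FORWARD -s 192.168.1.0/24 -p tcp --dport 80 \
  -m time --timestart 12:00 --timestop 13:00 -j ACCEPT
```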

External Links:

The official Firewall Builder site

The post netfilter Operation: Part Fourteen (Firewall Builder, conclusion) appeared first on pfSense Setup HQ.

VPN Access Strategies


A virtual private network (VPN) is exactly what it sounds like: the network connection you create is virtual, because you can use it over an otherwise public network. Basically, you take two endpoints for the VPN tunnel, and all traffic between these two endpoints will be encrypted so that the data being transmitted is private and unreadable to the systems in between. Different VPN solutions use different protocols and encryption algorithms to accomplish this level of privacy. VPNs tend to be protocol independent, at least to some degree, in that the VPN configuration is not on a per-port basis. Rather, once you have established the VPN tunnel, all applicable traffic will be routed across the tunnel, effectively extending the boundaries of your internal network to include the remote host. In this article, we will examine some of the issues involved in implementing VPN access.

VPN Access: Network Design

One of your first considerations when planning to provide VPN access is the network design. The VPN tunnel needs two endpoints: one will be the remote workstation, and the other will be a device specially configured for that purpose. This is generally called a VPN concentrator, because it acts as a common endpoint for multiple VPN tunnels. [As noted previously in this blog, Soekris makes affordable VPN cards that offload the computing-intensive tasks of encryption and compression from the CPU.] The remote system will effectively be using the concentrator as a gateway into the internal network; as such, the placement of the concentrator is important. In a highly secured environment, the concentrator is placed in a DMZ sandwiched between two firewalls, one facing the Internet and the other facing internally. While this type of arrangement is the most secure, it takes more hardware to implement.


Another way to place the VPN concentrator inside a DMZ is to use an additional interface on the firewall as the DMZ in a “one-legged” configuration. This saves you having to implement an additional firewall, but still provides some isolation between the concentrator and the rest of the internal network. If an attacker compromised a remote host who was VPNed into the concentrator or compromised the concentrator itself, they would still have a firewall between them and the internal network. The least preferable option is to place the concentrator inside the internal network. With this type of design, if the concentrator is compromised, the attacker would have full access to the internal network, with no firewalls to inhibit their activities. With any of these designs, you will have to permit the required ports through the firewall and forward them to your VPN concentrator in order to ensure VPN access.

VPN Access: Protocols

Another consideration in providing VPN access is the type of VPN protocol you want to use. IPsec is still the most widely deployed VPN technology for good reason. One is interoperability. As a widely used and tested standard, IPsec will work with virtually any modern firewall and operating system. The disadvantage of IPsec is that it can sometimes be difficult to configure properly, and there is zero margin for error in the configuration. Both ends have to use the same parameters for encryption, hashing, and so forth, or the tunnel cannot be established. SSL is an increasingly popular choice for VPNs, largely because of its simplicity to implement.

Once you have chosen a design and VPN technology, you need to consider the administrative ramifications of offering remote access. Some level of training will be required; at the very least, users may require training to use the VPN software. It is a good idea to educate your users on good security habits as well. A determination will also need to be made as to whether remote users are allowed to use their own personal computers and/or laptops, or if they must use a company-provided computer for remote access. The former option carries with it many risks. When a remote user connects their personal computer to the corporate network, they may have spyware, a virus, or any number of potentially damaging conditions present on their system. Because you probably do not have administrative access to their systems, you may have no way to secure the personal systems even if you wanted to. This is why most companies require that only corporate resources be allowed to connect to the company network.

VPN Access: Hardware

One last consideration for VPN access is hardware selection. Normal workplace desktop applications place very little strain on even a remotely modern processor. The same is not true when it comes to VPN connections. A single VPN connection requires little overhead and rarely impacts the remote user’s system unless it is especially underpowered. The VPN concentrator, however, must handle the encryption and decryption of multiple connections, in addition to managing the volume of network data that will be accessed through it. For this reason, if you anticipate more than just a couple of simultaneous VPN connections, you will want to test and evaluate your hardware needs.

Internal Links:

pfSense VPN: Part One

pfSense VPN: Part Two

pfSense VPN: Part Three (PPTP)

External Links:

An Overview of VPN Concentrators at YouTube (from CompTIA’s Network+ certification training)

How the VPN Concentrator Works at networkingtechnicalsupport.blogspot.com

The post VPN Access Strategies appeared first on pfSense Setup HQ.

OpenVPN


Introducing OpenVPN

One of the most commonly used open source SSL VPNs is OpenVPN, which uses the TAP and TUN virtual drivers. TUN (network TUNnel) simulates a network layer device and operates on layer 3 packets such as IP packets. TAP (network tap) simulates a link layer device and operates on layer 2 packets such as Ethernet frames. TUN is used with routing, while TAP is used for creating a network bridge. For Linux kernel 2.4 or later, these drivers are already bundled with the kernel. OpenVPN tunnels traffic over a single UDP port (1194 by default in current versions; early releases used port 5000). OpenVPN can use the TUN driver to pass IP traffic, or the TAP driver to pass Ethernet traffic, and its behavior is set in configuration files. OpenVPN has two secure modes. The first mode is based on SSL/TLS security using public keys such as RSA, and the second is based on symmetric keys or pre-shared secrets. RSA certificates and the keys for the first mode can be generated using the openssl command. The locations of these certificates and private keys are specified in the *.conf files used to establish the VPN connection.

The .crt extension will denote the certificate file, and .key will be used to denote private keys. The SSL-VPN connection will be established between two entities, one of which will be a client, which can be your laptop, and the other will be a server running at your office or lab. Both these computers will have .conf files, which define the parameters required to establish an SSL-VPN connection.
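As a minimal sketch of the pre-shared-secret mode, the two .conf files might look like the following. The host name, key file name, and tunnel addresses are illustrative assumptions, not values from this article; the static key itself can be generated with "openvpn --genkey --secret static.key":

```text
# server.conf (the office/lab end)
dev tun
ifconfig 10.8.0.1 10.8.0.2    # example tunnel addresses: local, remote
secret static.key             # pre-shared key file

# client.conf (the laptop end)
remote vpn.example.com        # hypothetical server address
dev tun
ifconfig 10.8.0.2 10.8.0.1
secret static.key
```

The SSL/TLS mode replaces the secret directive with ca, cert, and key directives pointing at the .crt and .key files described above.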

OpenVPN: The Pros and Cons of SSL VPN

SSL VPN is one way to transfer information securely, since a web browser can be used to establish an SSL VPN connection. Since SSL VPN is clientless, it will result in cost savings and can be configured to allow access from corporate laptops, home desktops, or any computer in an Internet cafe. SSL VPNs also provide support for a number of authentication methods and protocols, including:

  • Active Directory (AD)
  • Lightweight Directory Access Protocol (LDAP)
  • Windows NT LAN Manager (NTLM)
  • Remote Authentication Dial-In User Service (RADIUS)
  • RSA Security’s RSA ACE/Server and RSA SecurID

Many SSL VPNs also provide support for single sign-on (SSO) capability. More sophisticated SSL VPN gateways provide additional network access through downloadable ActiveX components, Java applets, and installable Win32 applications. These add-ons help remote users access a wide range of applications, including:

  • Citrix MetaFrame
  • Microsoft Outlook
  • NFS
  • Remote Desktop
  • Secure Shell (SSH)
  • Telnet

However, not all SSL VPN products support all applications.

SSL VPN can also block traffic at the application level, blocking worms and viruses at the gateway. SSL VPN is also not bound to any IP address; hence, unlike IPsec VPN, connections can be maintained as the client moves. SSL VPN differs from IPsec VPN in that it provides fine-tuned access control: each resource can be defined in a very granular manner, even as far as a URL. This feature of SSL VPN enables remote workers to access internal web sites, applications, and file servers. This differs from IPsec VPN, where access to the entire corporate network can be defined in a single statement. SSL-based VPN uses Secure HTTP on TCP port 443. Many corporate network firewall policies allow outbound access for port 443 from any computer in the corporate network. In addition, since HTTPS traffic is encrypted, few restrictive firewall rules are needed for SSL VPN.


As you know, SSL-based VPN offers a greater choice of client platforms and is easy to use. However, an organization that wants to be sure its communication channel is encrypted and well secured will never assume that any computer in an Internet cafe is trusted. This in turn requires a trust association with an untrusted client connection. To address the concern of an untrusted client, whenever a client from an untrusted platform connects to the VPN, a small Java applet is downloaded to the client that searches for malicious files, processes, or ports. Based on the analysis of the computer, the applet can also restrict the types of clients that can connect. This may be theoretically feasible, but doing it in practice requires mapping the policies of an anti-virus and anti-spyware tool into the endpoint security tool used by the VPN. In addition, these applets are prone to evasion and can be bypassed. Note also that you need administrative access to perform many of the cleanup operations, such as deleting temporary files, deleting cookies, and clearing the cache. And if you have administrative rights on a computer in an Internet cafe, you should assume that the system is infected with keystroke loggers and sophisticated malicious remote access tools. [A good example would be Back Orifice.]

By using SSL VPN, a user can download sensitive files or confidential, proprietary corporate data. This sensitive data has to be deleted from the local computer when the SSL VPN session is terminated. To ensure the safety of confidential data, a sandbox is proposed and used: any data downloaded from the corporate network via SSL VPN is stored in the sandbox, and after the session is terminated, the data in the sandbox is securely deleted. All logon credentials require deletion after session termination as well. Since an SSL VPN can be established even from a cyber cafe, a user might leave the session unattended; to prevent such issues, some systems require periodic reauthentication.

Because SSL VPN works on the boundary of Layers 4 and 5, each application has to support its use. In IPsec VPN, a large number of static IP addresses can be assigned to remote clients using RADIUS, which in turn provides the flexibility to filter and control traffic based on source IP address. In the case of SSL VPN, the traffic is normally proxied from a single address, and all client sessions originate from this single IP, so a network administrator is unable to allocate privileges using a source IP address. SSL-based VPN allows more granular firewall configuration than IPsec VPN for controlling access to internal resources. Another cause of concern with SSL-based VPN is packet-drop performance: IPsec will drop a malformed packet at the IP layer, whereas SSL will carry it further up the OSI model before dropping it, so a packet must be processed more before it is dropped. This behavior of SSL-based VPN can be misused to execute DoS attacks and, if exploited, can result in high resource usage.

External Links:

The official OpenVPN site

OpenVPN on Wikipedia

TUN/TAP on Wikipedia

OpenVPN DD-WRT Wiki

The post OpenVPN appeared first on pfSense Setup HQ.

X Window System


Introducing the X Window System

X Window is the underlying management system for most Unix and Linux GUIs. It takes an entirely different architectural approach than a Microsoft Windows system, in that the X Window system is set up in a client-server architecture similar to VNC. In this model, the X server communicates with various client programs. The server accepts requests for graphical output (windows) and sends back user input (from keyboard, mouse, or touchscreen).

When reading the X Window documentation, you will find that they use the terms server and client in the reverse of what would seem intuitive, meaning the server is where the display is being generated, not the remote machine to which you are connecting. The server in this context may function as an application displaying to a window of another display system, a system program controlling the video output of a PC, or a dedicated piece of hardware. This client-server terminology (the user’s terminal being the server and the applications being the clients) often confuses new X users. But X takes the perspective of the application, rather than that of the end-user. Since X provides display and I/O services to applications, it is a server. Applications use these services; thus they are clients.


Most current implementations of the X Window system are based on X.Org, the open source implementation of the X11 protocol. A closely related (and older) project is the XFree86 Project, the open source implementation from which X.Org was forked. X11 is the protocol used to transfer information about the GUI between the server and the client. The end result of these design decisions is that, much like Windows’ built-in terminal server support, two Linux systems can remotely access each other via a GUI virtual desktop.

You can configure the X Window System to permit connections from remote systems without any third-party software. While this works, the evolution of desktop window managers and common software packages has rendered this method inefficient. A much more robust way to accomplish the same thing is to use NX technology, developed by NoMachine, a highly optimized process and protocol to make X sessions available remotely. The NoMachine remote desktop is available for free (client and server) from the official NoMachine website, and commercial versions are also available. In December 2010, NoMachine announced that forthcoming NX releases (4.0 and up) would be closed source. Fortunately, an open source version of the NX server, called FreeNX, is available from the official FreeNX website. FreeNX does not support relaying sound to the client, while the NoMachine server does.

External Links:

X Window System on Wikipedia

NX technology on Wikipedia

The official NoMachine web site

The official Free NX web site

The post X Window System appeared first on pfSense Setup HQ.

NoMachine Server Installation and Configuration


Installing the NoMachine server using the Debian package installer (dpkg).

In the previous article, we introduced the X Window system and discussed different X Window remote desktop options. In this article, I will cover installation of the NoMachine remote desktop server and the various server options.

To set up the NoMachine server, download and install it using whatever method is appropriate for your Linux distribution. As far as I know, it is not in any of the standard repositories. To install the NoMachine server under Linux Mint, I downloaded NoMachine for Debian Linux and used the Debian package installer to install it:

sudo dpkg -i nomachine.4.1.29.5.i386.deb

After a few minutes, the NoMachine server was installed and ready to use. Depending on the distribution you are using, the installation may be more involved. Most of the major distributions should have packages available that make the installation relatively painless.


Configuring the NoMachine Server

Once it is installed, you can launch the NoMachine server (on Linux Mint, it can be found in the Internet program group). The NoMachine server interface has two tabs: one called “Connected users” and a second for “Active transfers“. There is also a “Connections” option to toggle allowing connections, as well as a button called “Connection preferences“.


The Services tab under Connection Preferences in the NoMachine server interface.

In “Connection preferences”, there are six separate tabs: “Services“, “Security“, “Devices“, “Transfers“, “Performance“, and “Updates“. “Services” lists the network services running and allows you to configure them. In this case, we are running the NX service on port 4000. There are two other options: “Start automatic services at startup“, which causes services marked as automatic to be started when the machine starts, and “Advertise this computer on the network“, which causes NoMachine to broadcast the required information to let other computers discover it on the local network.

The next tab is “Security Preferences“. There are three options here. The first is “Require permission to let remote users connect“, which if selected requires the local user to accept the connection before the remote user can connect to the desktop. The second is “Require permission to let the remote users interact with the desktop“, which if selected causes remote users to connect in view-only mode. The third option is “Hide the NoMachine icon in system tray“; if this is selected, the NoMachine menu won’t be accessible in normal conditions, but notifications will still be displayed when somebody connects.

The “Devices” tab controls what devices are made available to the remote user. Disks, printers, USB devices, smart card readers, and network ports are selected by default. There is also an “Enable audio streaming and microphone forwarding” check box which is selected by default. The “Transfers” tab controls transfer preferences. Here you can allow or deny the uploading of files by remote users, and allow or deny the downloading of files. You can also disallow files bigger than a certain size for both uploads and downloads, and set the directory to which files are saved.

The “Performance” tab controls system performance and has four options. “Use a specific display encoding” allows the user to select from a dropdown list of encoding algorithms, including VP8, MJPEG and H264. “Request a specific framerate” allows the user to select a framerate from a dropdown list (a higher frame rate uses more processing power). “Use acceleration for display processing” uses the GPU and accelerated graphics (when available) for better performance. “Use lightweight mode in virtual sessions” causes virtual sessions to only use the X protocol compression, which may require less bandwidth and less computing resources.

The final tab is “Update“, which controls update preferences. There is an “Automatically check for updates” check box, as well as a button to check for updates immediately. This tab also includes information about the product, version number and platform.

Now that we have covered server configuration, in the next article we will cover accessing the system remotely using NoMachine.

External links:

The official NoMachine site

The post NoMachine Server Installation and Configuration appeared first on pfSense Setup HQ.


NoMachine Client Installation and Configuration


Running the ps command on a computer running Xvnc.

In the previous article, we covered installation of the NoMachine server under Linux Mint. In this article, we will cover installing and running the NoMachine client under Windows.

First, we have to make sure a VNC server (Xvnc) is running on the computer running the NoMachine server. This can be done by typing vncserver in a terminal window. You can also specify several options. For example:

vncserver -geometry 800x600

would create a VNC desktop 800 pixels wide and 600 pixels high. The following command:

vncserver :1

would create a VNC desktop with a display number of 1 (omitting this parameter causes VNC to use the next available display number). This command:

vncserver -depth 24

creates a VNC desktop with a pixel depth of 24 (true color). Other permissible values are 8, 16 and 15. Consult the vncserver man page for other options.
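These options can be combined into a single invocation. As a purely illustrative sketch, a small helper function (the name build_vnc_cmd is my own) shows how the pieces fit together:

```shell
# Sketch: compose a vncserver command line from a display number,
# geometry, and pixel depth (helper name is illustrative)
build_vnc_cmd() {
  display="$1"; geometry="$2"; depth="$3"
  echo "vncserver :${display} -geometry ${geometry} -depth ${depth}"
}

build_vnc_cmd 1 800x600 24
# prints: vncserver :1 -geometry 800x600 -depth 24
```

Running the printed command would start a true-color 800x600 desktop on display :1.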

Once you have started vncserver, you probably want to check to make sure it is running. To do so, you can type:

ps -eaf | grep Xvnc

If Xvnc is running, you should see a line similar to the one in the screenshot shown at the beginning of this article.
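If you check for Xvnc often, the pipeline above can be wrapped in a small helper. This is a sketch using pgrep, which is part of procps and available on most Linux systems:

```shell
# Sketch: succeed if a process whose name exactly matches $1 is running
is_running() {
  pgrep -x "$1" > /dev/null
}

if is_running Xvnc; then
  echo "Xvnc is running"
else
  echo "Xvnc is not running"
fi
```

Unlike ps | grep, pgrep -x will not accidentally match the grep process itself.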

Downloading and Installing the NoMachine Client in Windows


The NoMachine setup wizard.

Now we need to install the NoMachine client in Windows. First, download the client from the NoMachine web site. Then run the NoMachine executable, either by selecting Run from the Start menu and selecting the executable, or by double-clicking on the executable in Windows Explorer.

You will be presented with the NoMachine Setup Wizard dialog box. Click on “Next” to continue installation. The next dialog box contains the End-User License Agreement (EULA); if you agree with the terms, click on the “I accept the agreement” radio button and click “Next“. The next dialog box allows you to change the installation path; if you want to install the NoMachine client into a different directory, change it here and click “Next“. The software will install now. You may see dialog boxes which read “The software you are installing has not passed Windows Logo testing”; if so, click on “Continue Anyway” to continue. Once installation has completed, a dialog box will appear to inform you so; click on “Finish“.


From the Start menu, navigate to Programs -> NoMachine -> NoMachine to start the NoMachine client. If this is the first time you are running the program, the first window will show you how to use the program. Click on “Continue” to advance to the next screen.

If this is the first time you have run the NoMachine client, the next screen will be the “Create New Connection” wizard. Here you can enter the IP address of the computer to which you want to connect. Once you have set up the remote computer, double-click on it to connect to the computer.

After a few seconds, the NoMachine client will prompt you for login credentials. Enter your username and password; if you want NoMachine to save the password, check the “Save the password in the connection file” check box. Once you are done, click “OK“. After another few seconds, you should be connected to the remote computer. If this is the first time you have run NoMachine, there will be two screens with instructions on how to use the interface. After that, you will see a screen that gives you the following choices: [1] display the menu panel covering the whole screen (the default), or [2] display the menu panel as a window. Choose the way you want the menu panel displayed and click “OK“.

The next screen controls the option for audio streaming. Audio is forwarded to the client, but you can control whether audio is played on the remote server. Check the “Mute audio on the server while I’m connected” to mute the audio, and click on “OK“. The next screen controls the option for display resolution. If the remote machine has a different resolution than the client, you can check the “Change the server resolution to match the client when I connect” check box to make sure the resolution matches. Click the “OK” button when you are done choosing this option.

Now you should be connected to the remote desktop. If you want to change the settings for the client, hover your mouse over the upper right corner; when the page-turning icon appears, click on it and the settings will appear. There are seven options here: “Input“, “Devices“, “Display“, “Audio“, “Mic in“, “Recording“, and “Connection“. Click the icon for the settings you want to change. You can now change settings; click on “Done” when you are finished and click “Done” again to exit out of the settings screen and return to the remote desktop.

External Links:

The official NoMachine web site

The post NoMachine Client Installation and Configuration appeared first on pfSense Setup HQ.

Apache Server Vulnerabilities


The Apache Web Server

The Apache HTTP Server is a web server application based on NCSA HTTPd. Development of Apache began in early 1995 after work on the NCSA code stalled; it quickly overtook HTTPd as the dominant web server and has been the most popular web server in use since April 1996. As of June 2013, Apache was estimated to serve 54.2 percent of all active websites, so if you come across a website, there’s a better than even chance that it’s hosted by an Apache server (this site is).

The Apache server supports a variety of features. Many of these features are implemented as compiled modules which extend the core functionality. Some common language interfaces support Perl, Python, Tcl, and PHP. Other features include Secure Sockets Layer and Transport Layer Security support. Because the source code is freely available, anyone can adapt the server for specific needs, and there is a large public library of Apache server add-ons.

Although the main design goal of the Apache server is not to be the fastest web server, Apache does have performance similar to other high-performance web servers. Instead of implementing a single architecture, Apache provides a variety of MultiProcessing Modules (MPMs) which allow Apache to run in a process-based, hybrid (process and thread), or event-hybrid mode, to better match the demands of each particular infrastructure. The multi-threaded architecture implemented in Apache 2.4 should provide performance equivalent to or slightly better than event-based web servers.


Apache Server Vulnerabilities

All software systems have the same general types of vulnerability and Apache is no different. It can be adversely affected by any one of the following problems: [1] poor application configuration; [2] unsecured web-based code; [3] inherent Apache security flaws, and [4] fundamental OS vulnerabilities.

Apache has many default settings that require modification for secure operation. Nearly all configuration information for Apache Web server exists within the httpd.conf file and associated Include files. Because many configuration options exist within these files, it can be easy to make configuration errors that expose the application to attack.

The second manner in which vulnerabilities are exposed is via poorly implemented code on the Apache server. Often, Web developers are far more concerned with business functionality than the security of their code. For instance, poorly written dynamic web pages can be easy denial of service (DoS) targets for attackers, should coded limitations be absent from back-end database queries. Simply publishing confidential or potentially harmful information without authentication can provide enemies with ammunition for attack. For these reasons, you must review and understand not only the Apache application but the information and functionality being delivered via the system.

As with Microsoft’s IIS server, vulnerabilities can exist within the Apache server’s application code itself. There are many means by which hackers can breach or disable an Apache system, such as:

  • Denial of Service (DoS)
  • Buffer overflow attacks
  • Attacks on vulnerable scripts
  • URL manipulation

Occasionally, Apache security flaws are discovered and announced by Apache or by various security groups. The Apache development team is typically quick to respond and distribute patches in response to such events. For this reason, it is critical that you be vigilant in your attention to security newsgroups and to Apache’s security advisory site.

Another source of vulnerability within an Apache web server could occur as a result of foundational security flaws in the OS on which Apache is installed. Apache can be run on just about any OS. You should be very familiar with the specific security vulnerabilities for any OS on which you run Apache.

In the next article, we will discuss the merits of patching and securing the OS as a means of securing your Apache server.

External Links:

The official Apache web site

Apache HTTP Server at Wikipedia

The official Apache Software Foundation web site

Apache web server resource site

The post Apache Server Vulnerabilities appeared first on pfSense Setup HQ.

Apache Server Hardening: Part One


In the next few articles, we will take a look at Apache server hardening. We will begin by considering OS vulnerabilities.

Apache Server Hardening: Patch the OS

Code deficiencies can exist in OSes and lead to OS and application vulnerabilities. Therefore, it is imperative that you fully patch newly deployed systems and remain current with all released functional and security patches. At regular intervals, review the published vulnerabilities at your OS manufacturer’s web site.

This table lists some popular OSes and their security sites:

This list gives some popular OSes and their security information sites:

  • Oracle Solaris: www.oracle.com/technetwork/server-storage/solaris11/technologies/security-422888.html
  • Microsoft: www.microsoft.com/technet/security/default.mspx
  • Mac OS: www.apple.com/support/security
  • RedHat Linux: www.redhat.com/security
  • FreeBSD: www.freebsd.org/security
  • OpenBSD: www.openbsd.org/security.html

Because Apache is so often run on various Unix, Linux, and BSD distributions, we include patching steps here so that you can confidently deploy your Apache web server on a well-hardened foundational OS, which will facilitate Apache server hardening. In general, however, each vendor provides a full suite of tools and information designed to help you remain current of their released software updates. Become familiar with each of your vendor’s OS patching methodologies and software tools. As the security administrator, you should reserve predetermined time periods for maintenance windows during episodes of low customer activity. However, the discovery of serious OS vulnerabilities could necessitate emergency downtime while patches are applied.


Like patching, all systems used to provide services such as HTTP and HTTPS to customers should be thoroughly hardened before they are placed in a production environment. Hardening includes many steps such as the following:

  • Setting file permissions
  • Locking down accounts
  • Establishing proper OS security policies
  • Configuring host-based firewalls
  • Disabling vulnerable services

Now that we have a secure OS, it’s time to discuss how to properly and securely configure the Apache web server.

The Apache Web server is a powerful application through which you can deliver critical business functionality to customers. With this power comes the possibility of misuse and attack. To ensure that your Apache server is running securely, we have compiled a series of steps for Apache server hardening. You might also want to read additional information or review other Apache security checklist documents before deploying your Apache server. Excellent reference guides are the CIS Apache Benchmark document, available from the Center for Internet Security, and the NIST Apache Benchmark document, available at csrc.nist.gov/checklists/repository/1043.html.

You should follow three general steps when securing the Apache web server:

  • Prepare the OS for Apache web server
  • Acquire, compile, and install the Apache web server software
  • Configure the httpd.conf file

We will cover all three of these crucial steps in future articles.

External Links:

13 Apache Web Server Security and Hardening Tips at www.tecmint.com

Apache 2.0 Hardening Guide

Apache Server Hardening & Security Guide at chandank.com

The post Apache Server Hardening: Part One appeared first on pfSense Setup HQ.

Apache Server Hardening: Part Two


After you’ve patched and hardened your OS, you’ll need to accomplish a couple of quick tasks prior to obtaining, compiling, and installing the Apache software. A critical part of installing Apache is to provide a user account and group that will run the web server. It is important that the user and group you select be unique and unprivileged, to reduce exposure to attack.

It is important not to run your Apache web server as the user Nobody. Although this is often a system administrator favorite and seemingly unprivileged account for running Apache and other services, the Nobody account has historically been used for root-like operations in some OSes and should be avoided.

Configuring Accounts

Choose and configure a user and group account using the following Unix OS steps. In this example, we will use wwwusr and wwwgrp as the Apache username and group, respectively.

  1. As root from the command line, type groupadd wwwgrp to add a group.
  2. Type useradd -d /usr/local/apache/htdocs -g wwwgrp -c “Apache Account” -m wwwusr to add the user.

The second step creates the user account but also creates a home directory for the user in /usr/local/apache/htdocs.

After creating the user and group accounts, you’ll need to lock down the wwwusr user account for use with Apache. By locking the account and providing an unusable shell, this action ensures that no one can actually log into the Web server using the Apache account:

  1. As root from the command line, type passwd -l wwwusr to lock the Apache account.
  2. Type usermod -s /bin/false wwwusr to configure an unusable shell account for the Apache account.
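You can confirm the unusable shell took effect by inspecting the account's passwd entry. The sketch below parses a sample line in the /etc/passwd format rather than reading your real system file (the UID/GID values are illustrative):

```shell
# Sketch: the 7th colon-separated field of a passwd entry is the login shell;
# after usermod -s /bin/false it should read /bin/false.
# Sample entry is illustrative, not read from a live system.
entry='wwwusr:x:1001:1001:Apache Account:/usr/local/apache/htdocs:/bin/false'
login_shell=$(printf '%s\n' "$entry" | awk -F: '{print $7}')
echo "$login_shell"   # prints: /bin/false
```

On a live system, the equivalent check would be grepping wwwusr out of /etc/passwd and reading the final field.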

Now you’re ready to get the Apache software and begin installation.

Downloading and Verifying Apache

Because Apache is open-source software, you can freely download the binaries or source code and get going with your installation. Although there are many locations from which you could download the software, it is always best to obtain the Apache software directly from an approved Apache Foundation mirror listed on the mirror list page of the official Apache site.


You’ll need to decide whether to install the server using precompiled binaries or to compile the source code yourself. From a security and functionality perspective, it is usually better to obtain the source code and compile the software, since doing so permits fine-tuning of security features and business functionality. Here we will discuss compiling the Apache server from source code, starting with verifying the integrity of your download.

To verify the checksum, you will need additional software called md5sum that might be part of your OS distribution. If it is not, you can download the software as part of GNU Coreutils available at the Coreutils page of the official GNU Operating System website. To verify the Apache checksum, perform the following steps. In this example, we’ll use Apache version 2.4.9:

  1. As root from the command line, change directories to where you downloaded the Apache source code tarball and checksum file.
  2. Type cat httpd-2.4.9.tar.gz.md5 to see the exact md5 checksum string. You should see something like f72fb1176e2dc7b322be16508isl39d httpd-2.4.9.tar.gz.
  3. From the same directory, type md5sum httpd-2.4.9.tar.gz to compute the checksum of the tarball. You should see the identical string shown in Step 2. If you do, the software you downloaded is authentic.
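md5sum can also perform the comparison for you with its -c flag. The sketch below is self-contained: it fabricates a stand-in file so it can run anywhere, whereas with a real download you would skip the first two lines and run md5sum -c against the .md5 file you fetched from the mirror:

```shell
# Sketch: create a stand-in "tarball" and checksum file, then verify.
# With a real download, only the final md5sum -c line is needed.
printf 'example data\n' > httpd-example.tar.gz
md5sum httpd-example.tar.gz > httpd-example.tar.gz.md5

md5sum -c httpd-example.tar.gz.md5 && echo "checksum verified"
```

A tampered or corrupted file would make md5sum -c report FAILED and exit nonzero.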

In the next article, we’ll cover compiling Apache.

External Links:

The Official Apache site

The official GNU Operating System site

The post Apache Server Hardening: Part Two appeared first on pfSense Setup HQ.

Apache Server Hardening: Part Three


In the previous article, we discussed configuring the underlying OS and downloading and verifying Apache. After downloading and verifying the Apache source code, you’ll need to do some research to understand what options you want to compile into your web server. There are many modules, such as mod_access and mod_ssl, that can be added into your server to provide additional functionality and security. A full list of Apache Foundation-provided modules can be found at the Apache web site. When choosing modules, be sure you select only what you need. Compiling extra, unnecessary modules will only result in a less secure, slower web server.

You should use caution in enabling and disabling services at compile time. Before you do so, determine the dependencies of your web server code. Failure to understand what services you require to operate could result in loss of critical functionality. It might be prudent to test your configuration in a lab environment before disabling services on a production server.


Once you’ve decided which modules and configurations to use, you should accomplish one final task before building your software. Obscure the Apache version information located in the ap_release.h file located in the $[ApacheSrcDir]/include directory. To do so, use vi, gedit, or the editor of your choice and alter the following lines to change the Software Vendor (Apache Software Foundation) information:

#define AP_SERVER_BASEVENDOR “Apache Software Foundation”
#define AP_SERVER_BASEPRODUCT “Apache”

In general, you’ll need to perform three steps to compile and install your Apache Web server, as follows:

  1. From the $[ApacheSrcDir] directory, run ./configure.
  2. After configuring the source, run make to compile the software.
  3. After compiling the software, run make install to install the Apache web server.

During the first step, you’ll decide what is added to the Apache server at compile time.

Modules to add or remove at compile time:

  • Status (remove): Provides potentially dangerous information via the server statistics web page
  • Info (remove): Provides potentially dangerous configuration information
  • Include (remove): Provides server-side include (SSI) functionality
  • userdir (remove): Permits users to create personal homepages in ~user home directories
  • mod_ssl (add): Provides cryptography using the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols
  • mod_log_forensic (add): Increases granularity of logging to forensic levels
  • mod_unique_id (add): Required by the mod_log_forensic module

mod_security, a third-party Apache module available from www.modsecurity.org, provides application firewall intrusion protection and prevention. To enable mod_security, you must download and compile the software into the Apache web server. Adding mod_security increases the secure operation of your Apache web server and adds functionality including, but not limited to, the following:

  • HTTP protocol awareness
  • Anti-evasion technique prevention such as URL encoding validation and URL decoding
  • Enhanced audit logging
  • Built-in chroot functionality
  • Buffer overflow protection
  • HTTPS filtering

We will enable mod_security in our example because it adds so many security features to our system. Once you have downloaded mod_security source from the download page of the mod_security website, perform the following steps as root:

cd $[modsecuritySrcDir]/apache2

mkdir -p $[ApacheSrcDir]/modules/security

cp mod_security.c Makefile.in config.m4 \ $[ApacheSrcDir]/modules/security

cd $[ApacheSrcDir]

./buildconf

Now mod_security appears like other Apache modules. When we compile Apache, we will enable it using the option --enable-security. There are many options to consider in configuring the Apache source code for compilation. To view a list of options, issue the command ./configure --help from the $[ApacheSrcDir] directory.
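As an illustrative sketch only, a configure invocation reflecting the module choices in the table above might look like the following. Verify each flag against ./configure --help for your Apache version before using it:

```shell
./configure \
  --prefix=/usr/local/apache \
  --disable-status \
  --disable-info \
  --disable-include \
  --disable-userdir \
  --enable-ssl \
  --enable-unique-id \
  --enable-log-forensic \
  --enable-security
```

The --disable flags drop the risky default modules, while the --enable flags bring in SSL/TLS, forensic logging, and mod_security support.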

After successfully configuring the source code, proceed with running make and make install. You will see a message indicating successful completion of building and installing Apache. Now that we have successfully installed the Apache web server software, we will proceed to the next step: configuring the httpd.conf file for secure operation. We will cover that in the next article.

External Links:

The official Apache website

The official ModSecurity website

The post Apache Server Hardening: Part Three appeared first on pfSense Setup HQ.

Apache Server Hardening: Part Four (httpd.conf)


In the previous article, we looked at compiling and installing Apache and discussed the benefits of mod_security. In this article, we will cover httpd.conf configuration.

httpd.conf File Configuration

The Apache web server stores all its configuration data in the httpd.conf file located in the $[ApacheServerRoot] directory, which is, in our example, /usr/local/apache. The httpd.conf file includes many directives that can be categorized into the following sections:

  • Server Directives
  • User Directives
  • Performance/Denial of Service directives
  • Server Software Obfuscation Directives
  • Access Control Directives
  • Authentication Mechanisms
  • Directory Functionality Directives
  • Logging Directives

Not all directives play a significant role with regard to security. In this article, we will discuss the directives that impact the security of your Apache server. Furthermore, because we disabled a lot of functionality at compile time, some directives that would normally be dangerous do not need to be removed, since they were not added into the compiled Apache binaries. There may also be other configuration files, called Include files, associated with the httpd.conf file. Since we have enabled mod_security, there is a long list of potential configurations to make in an Include filled called modsecurity.conf, which is usually located in the $[ApacheServerRoot]/conf directory.


Not all directives play a significant role with regard to security. In this article, we will discuss the directives that impact the security of your Apache server. Furthermore, because we disabled a lot of functionality at compile time, some directives that would normally be dangerous do not need to be removed, since they were not added into the compiled Apache binaries. There may also be other configuration files, called Include files, associated with the httpd.conf file. Since we have enabled mod_security, there is a long list of potential configurations to make in an Include file called modsecurity.conf, which is usually located in the $[ApacheServerRoot]/conf directory. In this section, I have included the mod_security configuration recommended at modsecurity.org. For more information about configuring this file, refer to the mod_security documentation.

Recommended modsecurity.conf file:

# Turn ModSecurity On
SecFilterEngine On

# Reject requests with status 403
SecFilterDefaultAction "deny,log,status:403"

# Some sane defaults
SecFilterScanPOST On
SecFilterCheckURLEncoding On
SecFilterCheckUnicodeEncoding Off

# Accept almost all byte values
SecFilterForceByteRange 1 255

# Server masking is optional
# SecServerSignature "OurServer"

SecUploadDir /tmp
SecUploadKeepFiles Off

# Only record the interesting stuff
SecAuditEngine RelevantOnly
SecAuditLog logs/audit_log

# You normally won't need debug logging
SecFilterDebugLevel 0
SecFilterDebugLog logs/modsec_debug_log

# Only accept request encodings we know how to handle
# we exclude GET requests from this because some (automated)
# clients supply "text/html" as Content-Type
SecFilterSelective REQUEST_METHOD "!^(GET|HEAD)$" chain
SecFilterSelective HTTP_Content-Type \
"!(^application/x-www-form-urlencoded$|^multipart/form-data;)"

# Do not accept GET or HEAD requests with bodies
SecFilterSelective REQUEST_METHOD "^(GET|HEAD)$" chain
SecFilterSelective HTTP_Content-Length "!^$"

# Require Content-Length to be provided with
# every POST request
SecFilterSelective REQUEST_METHOD "^POST$" chain
SecFilterSelective HTTP_Content-Length "^$"

# Don't accept transfer encodings we know we don't handle
SecFilterSelective HTTP_Transfer-Encoding "!^$"

There are a couple of directives you must configure in the httpd.conf file to ensure that the Apache web server runs using the unprivileged user account we established earlier, among other things. Inspect your httpd.conf file to verify that the following statements appear as shown below. Recall that we decided to run Apache as wwwusr:wwwgrp.

User wwwusr
Group wwwgrp

Also, configure the ServerAdmin directive with a valid alias e-mail address, such as the following:

ServerAdmin hostmaster@yoursecuredomain.com

This will provide a point of contact for your customers, should they experience problems with your site.

Performance-Tuning Directives in httpd.conf

There are a number of performance-tuning directives in the Apache httpd.conf file. As a security professional, you should interpret these directives as DoS-prevention statements, since they control resource allocation for users of the Apache server. The following directives control the performance of an Apache server:

  • Timeout: Configures the time Apache waits to receive GET requests, the time between TCP packets for POST or PUT requests, or the time between TCP ACK statements in responses. The Apache default is 300 seconds (five minutes), but you might want to consider reducing this timer to 60 seconds to mitigate DoS attacks.
  • KeepAlive: Configures HTTP1.1-compliant persistency for all web requests. By default, this is set to On and should remain as such to streamline web communication.
  • KeepAliveTimeout: Determines the maximum time to wait before closing an inactive, persistent connection. Here we will keep this value at the default of 15 seconds, since raising it can cause performance problems on busy servers and expose you to DoS attacks.
  • StartServers: Designates the number of child processes to start when Apache starts. Setting this value higher than the default of 5 can increase server performance, but use care not to set the value too high, because doing so could saturate system resources.
  • MinSpareServers: This setting, like the MaxSpareServers setting, allows for dynamic adjustment of Apache child processes. MinSpareServers instructs Apache to maintain the specified number of idle processes for new connections. This number should be relatively low except on very busy servers.
  • MaxSpareServers: Maintains Apache idle processes at the specified number. Like MinSpareServers, the value should be low, except for busy sites.
  • MaxClients: As its name implies, this setting determines the maximum number of concurrent requests to the Apache server. We will leave this as the default value of 256.

Once you’ve finished editing this section of your httpd.conf, you should see something similar to the following:
Timeout 60
KeepAlive On
KeepAliveTimeout 15
StartServers 5
MinSpareServers 10
MaxSpareServers 20
MaxClients 256

By default, Apache informs web users of its version number when delivering a 404 (page not found) error. Since it is good practice to limit the information you provide to would-be hackers, we will disable this feature. Recall that we already altered the Apache server signature and that we installed mod_security. Both of these actions should be enough to obfuscate our server, because they both alter the default behavior. If you would like to turn off server signatures completely, you can always set the ServerSignature directive to Off and ServerTokens to Prod. This will disable Apache signatures entirely.
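If you do choose to disable signatures entirely, the corresponding httpd.conf lines are simply:

```apache
# Suppress the server version footer on error pages
ServerSignature Off
# Report only "Apache" in the Server response header
ServerTokens Prod
```

Together these reduce the version and module details leaked in both response headers and generated pages.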

The Apache web server includes mechanisms to control access to server pages and functionality. The statement syntax is part of the <Directory> directive and is fairly straightforward: you specify a directory structure, whether default access is permitted or denied, and the parameters that enable access to the directory if access is denied by default. There are many options for fine-grained control that you should learn by reading the Directory Directive section of the Apache Core Features document in the current version of the Apache documentation.

Regardless of the access you provide to your customers, you should secure the root file system using access control before placing your server into a production environment. In your httpd.conf file, you should create a statement in the access control directives area as follows:

<Directory />

Order Deny,Allow
Deny from all

</Directory>

This statement will deny access to the root file system should someone intentionally or accidentally create a symlink to /.

In the next article, we will discuss further hardening our Apache server using authentication mechanisms.

External Links:

The official Apache website

The post Apache Server Hardening: Part Four (httpd.conf) appeared first on pfSense Setup HQ.

Apache Server Hardening: Part Five


Apache User Authentication

Apache also includes several ways to authenticate customers using your web server, such as LDAP, SecureID, and basic .htaccess, to name a few. To use authentication mechanisms beyond basic .htaccess, you must compile the additional functionality in when you build Apache. Like access control, authentication mechanisms are specified as part of the directive.

The two steps to enabling basic .htaccess user authentication are:

  1. Creating an htpasswd file to store user credentials.
  2. Adding a directive to the httpd.conf file to protect a directory structure.

This is different than adding a login form on a web page and creating your own authentication. Let’s use an example to demonstrate how easy it can be to add authentication. In our example, we will secure a directory called /securedir and permit only customers Homer and Marge access to the files in that directory.


First, let’s create an htpasswd file somewhere not in the web server document root by issuing the following command:

htpasswd -c /usr/local/apache/passwdfile homer
New password: *****
Re-type new password: *****
Adding password for user homer

Next, we’ll add Marge to the list as well. This time we don’t need to use the -c argument, since our htpasswd file already exists:

htpasswd /usr/local/apache/passwdfile marge
New password: *****
Re-type new password: *****
Adding password for user marge

Now that we’ve established our customer accounts, we’ll finish by adding a directive to the httpd.conf file to protect the /securedir directory as follows:

<Directory /usr/local/apache/htdocs/securedir>
AuthType Basic
AuthName "Access for authenticated customers only"
AuthUserFile /usr/local/apache/passwdfile
require user marge homer

</Directory>

Now, when anyone attempts to access the /securedir directory, they’ll be prompted for a username and password. Because we specifically require only Marge and Homer, only they will be permitted to use the directory structure, if they authenticate properly.
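As an aside, the file htpasswd writes is just a list of user:hash lines. On many builds the default hash is Apache's apr1 (MD5-based) scheme, so you can generate a compatible entry with openssl when you need to script account creation. This is a sketch assuming the openssl binary is available; the fixed salt is only there to make the output repeatable:

```shell
# Build an htpasswd-style line for user "homer" with password "s3cret".
# htpasswd would pick a random salt; we fix one so the output is predictable.
hash=$(openssl passwd -apr1 -salt xyzzy s3cret)
echo "homer:$hash"
```

The resulting line (beginning homer:$apr1$xyzzy$) can be appended to the AuthUserFile just as htpasswd would do it.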

You can also restrict access based on a domain or IP address. The following directive will do this:

Order deny,allow
Deny from all
Allow from allowable-domain.com
Allow from XXX.XXX.XXX
Deny from evil-domain.com

You can specify the first three (or one or two) octets of an IP address defining the allowable domain.
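For example, a directive admitting only a single /24 network by its first three octets might look like the following (the directory path and addresses here are hypothetical):

```apache
<Directory "/usr/local/apache/htdocs/intranet">
    Order deny,allow
    Deny from all
    # Three octets match the whole 192.168.10.0-192.168.10.255 range
    Allow from 192.168.10
</Directory>
```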

Although this example involves modifying the httpd.conf file to control directory access, there is another way. You can create an .htaccess and .htpasswd file in the directory to which you want to control access. The .htaccess file should contain the same directive we described above. The .htpasswd file must be created using htpasswd. In the above example, to add access for Homer and Marge, we would first create (or clobber if it already exists) the password file /securedir/.htpasswd:

htpasswd -c .htpasswd homer

Now that we have created .htpasswd, we can add user marge to the existing password file (which contains one user, homer):

htpasswd .htpasswd marge

Within the directive is a subdirective called Options that controls functionality for the directory structures specified in the directive. The available options are listed below:

Option: Functionality
All: Default setting; includes all options except MultiViews
ExecCGI: Permits CGI script execution through mod_cgi
FollowSymLinks: Allows Apache to follow OS file system symlinks
Includes: Permits SSI through mod_include
IncludesNOEXEC: Permits SSI but denies exec and exec cgi
Indexes: Allows autoindexing using mod_autoindex if no configured index file is present
MultiViews: Permits content negotiation using mod_negotiation
SymLinksIfOwnerMatch: Allows Apache to follow OS file system symlinks, but only if the link and target file have the same owner

Many of the listed options are not relevant to our installation, since we disabled Includes and CGI during compile time. Regardless, a good default directive disabling most options is shown here:

<Directory "/usr/local/apache/htdocs">
Order allow,deny
Allow from all
Options -FollowSymLinks -ExecCGI -Includes -Indexes \
-MultiViews
AllowOverride None

</Directory>

At this point, your Apache server should be relatively secure. In the next article, we will discuss some Apache logging directives so that we can better monitor our server.

External Links:

Authentication and Authorization at the official Apache website

Apache Web Login Authentication at yolinux.com

The post Apache Server Hardening: Part Five appeared first on pfSense Setup HQ.


Apache Server Hardening: Part Six


Additional Directives

Within the directive is a subdirective called Options that controls functionality for the directory structures specified in the directive. The available options are listed below.

Option: Functionality
All: Default setting; includes all options except MultiViews
ExecCGI: Permits CGI script execution through mod_cgi
FollowSymLinks: Allows Apache to follow OS file system symlinks
Includes: Permits SSI through mod_include
IncludesNOEXEC: Permits SSI but denies exec and exec cgi
Indexes: Allows autoindexing using mod_autoindex if no configured index file is present
MultiViews: Permits content negotiation using mod_negotiation
SymLinksIfOwnerMatch: Allows Apache to follow OS file system symlinks, but only if the link and target file have the same owner

Many of the listed options are not relevant to our installation, since we disabled Includes and CGI during compile time. Regardless, here is a good default directive disabling most options:

<Directory “/usr/local/apache/htdocs”>
Order allow,deny
Allow from all
Options -FollowSymLinks -ExecCGI -Includes -Indexes \
-MultiViews
AllowOverride None

</Directory>

At this point, your Apache server should be relatively secure. Now, we move on to configuring logging options.


There are many reasons to configure logging on your Apache server. Whether it is helping you see top page hits, hours of typically high-volume traffic, or simply understanding who is using your system, logging plays an important part in any installation. More importantly, logging can provide a near-real-time and historic forensic toolkit during or after security events.

To ensure that your logging directives are set up correctly, we will provide an example of the logging options in the Apache web server. Apache has many options with which you should familiarize yourself by reading the Apache mod_log_config documentation page. This will help you understand the best output data to record in logs. Also, recall that we compiled Apache with mod_log_forensic, which provides enhanced granularity and logging before and after each successful page request.

An example logging configuration file is shown here:

ErrorLog /var/log/apache/error.log
LogLevel info
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{forensic-id}n\" %T %v" full
CustomLog /var/log/apache/access.log full
ForensicLog /var/log/apache/forensic.log

The example provides a customized logging format that includes detailed output and places all the log files in the /var/log/apache directory.
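Once entries accumulate in /var/log/apache, even simple shell tools give a quick forensic first pass. As a sketch, here is a status-code tally over a combined-format log; the sample requests below are fabricated for the demonstration:

```shell
# Write a few fabricated combined-format entries to a scratch log.
cat <<'EOF' > /tmp/access.log
192.0.2.10 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 612
192.0.2.11 - - [10/Oct/2023:13:55:40 +0000] "GET /admin HTTP/1.1" 404 209
192.0.2.10 - - [10/Oct/2023:13:55:41 +0000] "GET /index.html HTTP/1.1" 200 612
EOF
# Field 9 of the combined format is the HTTP status code; list each
# status with its count, most frequent first.
awk '{print $9}' /tmp/access.log | sort | uniq -c | sort -rn
```

A sudden spike of 404 or 403 responses in a tally like this is often the first sign of someone probing your server.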

After you have installed and configured your Apache server, you will need to do some quick cleanup of files that could represent a security threat. In general, you should not leave the source code you used to compile Apache on the file system. It is a good idea to tar the files up and move them to a secure server. Once you’ve done so, remove the source code from the Apache web server.

Removing Directories and Setting Permissions

You’ll also want to remove some of the default directories and files installed by the Apache web server. To do so, execute the following commands on your web server. If you have added content into your document root directory, you will want to avoid the first command:

rm -fr /usr/local/apache/htdocs/*
rm -fr /usr/local/apache/cgi-bin
rm -fr /usr/local/apache/icons

After removing files, let’s ensure that our Apache files have proper ownership and permissions before starting our server.

As we discussed previously, the Apache web server should be run as an unprivileged and unique account. In our example, we used the user wwwusr and the group wwwgrp to run our server. Let’s make sure our permissions are properly set by running the following commands:

chown -R root:wwwgrp /usr/local/apache/bin
chmod -R 550 /usr/local/apache/bin
chown -R root:wwwgrp /usr/local/apache/conf
chmod -R 660 /usr/local/apache/conf
chown -R root:wwwgrp /usr/local/apache/logs
chmod -R 664 /usr/local/apache/logs
chown -R root /usr/local/apache/htdocs
chmod -R 664 /usr/local/apache/htdocs
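You can spot-check the result with stat before starting the server. Here is a sketch against a scratch directory (GNU coreutils stat assumed), mirroring the 550 mode applied to the bin tree above:

```shell
# Recreate the bin-tree scheme in a scratch location and verify the mode.
mkdir -p /tmp/apache-demo/bin
touch /tmp/apache-demo/bin/httpd
chmod -R 550 /tmp/apache-demo/bin
stat -c '%a' /tmp/apache-demo/bin/httpd   # prints 550
```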

Monitoring Your Server

Even with the best defenses and secure configurations, breaches in your systems and applications can occur. Therefore, you cannot simply set up a hardened Apache web server and walk away thinking that everything will be just fine. Robust and comprehensive monitoring is perhaps the most important part of securely operating servers and applications on the Internet.

In Apache, there are several things to consider that will help you identify and react to potential threats. Your primary source of data will be the Apache and OS logs. Even with small web sites, however, sifting through this information can be a challenge. One of the first things to consider is integrating your Apache logs with other tools that help organize and identify potential incidents within the log file. Many open source and commercial products are available to aid you in securing your site. One such open source tool is Webalizer, available at http://www.webalizer.org, which features graphical representation of your Apache log file contents.

SNMP polling and graphing constitute another methodology commonly employed for secure monitoring. Often, it is extremely difficult to gauge the severity or magnitude of an event without visualization of data from logs or SNMP counters. One tool you might consider using is a module called mod_apache_snmp, available at SourceForge. The module can provide real-time monitoring of various metrics including, but not limited to:

  • Load average
  • Server uptime
  • Number of errors
  • Number of bytes and requests served

You might also consider commercial SNMP-based solutions, especially for enterprise-scale deployments. These tools help expedite monitoring deployment and usually include enhanced functionality to automatically alert you when important thresholds, such as web site concurrent connections, are crossed.

External Links:

The official Apache web site

The official Webalizer web site

The official Mod-Apache-Snmp web site

The post Apache Server Hardening: Part Six appeared first on pfSense Setup HQ.

whois and dig Commands


The whois Command

The whois command is useful when trying to track down a contact for someone causing trouble on your network. This command queries the primary domain name servers and returns all the information that Internic (or whoever their name registrar is) has. Internic used to be the quasi-government agency that was responsible for keeping track of all the domain names on the Internet. Internic became a commercial company called Network Solutions, and was then acquired by VeriSign. Now that name registration has been opened up for competition, there are literally dozens of official name registrars. However, you can still usually find out who owns a domain by using the whois command.

This command is useful for attacks coming both from within companies and from within ISP networks. Either way, you can track down the person responsible for that network and report your problems to them. They won’t always be helpful, but at least you can try. The syntax is:

whois domain-name.com

The variable domain-name.com is the domain name on which you are looking for information.

As an example, here’s the whois information for linux.com:

Domain Name: LINUX.COM
Registry Domain ID:
Registrar WHOIS Server: whois.domain.com
Registrar URL: www.domain.com
Updated Date: 2013-05-08 13:51:05
Creation Date: 1994-06-02 04:00:00
Registrar Registration Expiration Date: 2016-06-01 04:00:00
Registrar: Domain.com, LLC
Registrar IANA ID: 886
Registrar Abuse Contact Email: compliance@domain-inc.net
Registrar Abuse Contact Phone: +1.6027165396
Reseller: Dotster.com
Reseller: support@dotster-inc.com
Reseller: +1.8004015250
Domain Status: clientTransferProhibited
Domain Status: clientUpdateProhibited
Registry Registrant ID:
Registrant Name: Jim Zemlin
Registrant Organization: The Linux Foundation
Registrant Street: 660 York Street Suite 102
Registrant City: San Francisco
Registrant State/Province: CA
Registrant Postal Code: 94110
Registrant Country: US
Registrant Phone: +1.4157239709
Registrant Phone Ext:
Registrant Fax: +1.4157239709
Registrant Fax Ext:
Registrant Email: admin@linux-foundation.org
Registry Admin ID:
Admin Name: Jim Zemlin
Admin Organization: The Linux Foundation
Admin Street: 660 York Street Suite 102
Admin City: San Francisco
Admin State/Province: CA
Admin Postal Code: 94110
Admin Country: US
Admin Phone: +1.4157239709
Admin Phone Ext:
Admin Fax: +1.4157239709
Admin Fax Ext:
Admin Email: admin@linux-foundation.org
Registry Tech ID:
Tech Name: Jim Zemlin
Tech Organization: The Linux Foundation
Tech Street: 660 York Street Suite 102
Tech City: San Francisco
Tech State/Province: CA
Tech Postal Code: 94110
Tech Country: US
Tech Phone: +1.4157239709
Tech Phone Ext:
Tech Fax: +1.4157239709
Tech Fax Ext:
Tech Email: admin@linux-foundation.org
Name Server: NS1.LINUX-FOUNDATION.NET
Name Server: NS2.LINUX-FOUNDATION.NET
DNSSEC: Unsigned
URL of the ICANN WHOIS Data Problem Reporting System: http://wdprs.internic.net/
>>> Last update of WHOIS database: 2013-05-08 13:51:05 <<<

Registration Service Provider:
Dotster.com, support@dotster-inc.com
+1.8004015250
This company may be contacted for domain login/passwords,
DNS/Nameserver changes, and general domain support questions.

As you can see, you can contact the technical person in charge of that domain directly. If that doesn’t work, you can always try the administrative person. The whois command usually displays an e-mail address, a mailing address, and sometimes phone numbers. It tells when the domain was created and whether there have been recent changes to the whois listing. It also shows the domain name servers responsible for that domain name. Querying these name servers with the dig command can generate even more information about the remote network’s configuration.


Unfortunately, whois is not built into the Windows platforms, but there are plenty of web-based whois engines, including the one located on the Network Solutions web site.

It should be noted that if you administer domains of your own, you should make sure your whois listing is both up-to-date and as generic as possible. Putting real e-mail addresses and names in the contact information fields gives information that an outsider can use either for social engineering or password-cracking attacks. Also, people might leave the company, making your record outdated. It is better to use generic e-mail addresses, such as dnsmaster@example.com or admin@example.com. You can forward these e-mails to the people responsible, and it doesn’t give out valuable information on your technical organization structure.

The dig Command

The dig command queries a name server for certain information about a domain. Dig is an updated version of the nslookup command, which had been deprecated (but has since been revived). You can use it to determine the machine names used on a network, the IP addresses tied to those machines, which one is their mail server, and other useful tidbits of information. The general syntax is:

dig @server domain type

where server is the DNS server you want to query, domain is the domain you are asking about, and type is the kind of information you want on it. You will generally want to query the authoritative DNS server for that domain: that is, the one listed in their whois record as being the final authority on that domain. Sometimes the company runs this server; other times its ISP runs it.

Results of the dig command can yield valuable information, such as the host name of their mail server, their DNS server, and other important machines on their network. If you run a DNS server, you should configure it to respond to these kinds of queries only from authorized machines.

dig Record Types

Option: Description
AXFR: Attempts to get the whole file for the domain, or “zone” file. Some servers are now configured not to allow zone file transfers, so you may have to ask for specific records.
A: Returns any “A” records. “A” records are individual host names on the network, such as webserver.example.com and firewall1.example.com.
MX: Returns the registered mail host name for that domain. This is useful if you want to contact an administrator (try administrator@mailhost.example.com or root@mailhost.example.com).
CNAME: Returns any CNAMEd hosts, also known as aliases. For example: fido.example.com = www.example.com.
ANY: Returns any information it can generate on the domain. Sometimes this works when AXFR doesn’t.

External Links:

The whois protocol at Wikipedia

The dig command at Wikipedia

The post whois and dig Commands appeared first on pfSense Setup HQ.

Nlog: A Utility for Analyzing Nmap Logs


In a previous article, we covered the Nmap utility. You can save Nmap logs in a number of formats, including plain text or machine-readable, and import them into another program. However, if these options aren’t enough for you, Nlog can help you make sense of your Nmap output. Running it on very large networks can be a lifesaver, because perusing hundreds of pages of Nmap output looking for nefarious activity can be tedious.

The Nlog program helps you organize and analyze your Nmap output. It presents the results in a customizable web interface using CGI scripts, and makes it easy to sort your Nmap data into a single searchable database. On larger networks, this kind of capability is vital to making Nmap useful. H.D. Moore put together these programs and made them available. You can find more information about Nlog at securiteam.com. You can download Nlog at packetstormsecurity.com.

Nlog is also extensible. You can add other scripts to provide more information and run additional tests on the open ports it finds. The author provides several of these add-ons and instructions on how to create your own. Nlog requires Perl and works on log files generated by Nmap 2.0 and higher.

Installing Nlog

Follow these steps to install and prepare Nlog:

  1. Download the files from the Nlog web site.
  2. Unpack the Nlog files using the tar -zxvf command. It will unzip and neatly organize all the files for Nlog in a directory called nlog-1.6.0 (or other numbers, depending on the version number).
  3. You can use the installer script provided to automatically install and prepare the program. Note that you need to edit the script before you run it. Go to the Nlog directory and, using a text editor such as vi or emacs, open the file installer.sh and enter the variables where indicated for your system. Edit the following parameters with the correct values for your installation.
    CGIDIR=/var/www/cgi/
    HTMLDIR=/var/www/
    

    Put the path to your CGI directory. The above represents the correct values on a default Mandrake installation. Make sure you enter the correct ones for your system. For other Linux systems, find the path to this directory by using the locate command. This useful command will find any files with the text you insert after it.

  4. Save the file, then run it by typing:
    ./installer.sh

    The installation script automatically copies the CGI files to your CGI directory and the main HTML file to your HTML directory. It also changes the permissions on those files so they can be executed by your web browser.

  5. For the final step, go into the /html directory and edit the nlog.html file. In the POST statement, change the reference to the CGI directory to your own, which should be the same one used above (/var/www/cgi/). Save the file and you are ready to go.


Running Nlog

Nlog can be used as follows:

  1. The first thing you must do is create an Nlog database file to view. You do this by converting an existing Nmap log file. Make sure you save your Nmap logs with the machine-readable option (-m on the command line) to be able to use them in Nlog. You can then use a script provided with Nlog to convert the Nmap log into the database format that Nlog uses. To convert an Nmap machine-readable log, run the log2db.pl script using this command:
    log2db.pl logfile
    

    Replace logfile with your log file name and location.

  2. To combine multiple log files into a single database, use the following commands:
    cat * > /PATH/temp.db
    sort -u /PATH/temp.db > /PATH/final.db
    
  3. Replace /PATH with the path to your Nmap files and final.db with the name you want to use for the combined Nmap database. This sorts the files into alphabetical order and eliminates any duplicates.
  4. Start your web browser and go to the web directory (/var/www/ from the previous section).
  5. Select the Nmap database file you want to view and click Search.
  6. You can now open your Nmap database and sort it based on the following criteria:
    • Hosts by IP address
    • Ports by number
    • Protocols by name
    • State (open, closed, filtered)
    • OS match

    You can also use any combination of these criteria. For example, you could search for any web servers (http protocol) on Windows systems with a state of open.
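The merge-and-deduplicate step (2) above can be sketched with toy files; the record contents here are made up, since real Nlog databases use their own record format:

```shell
# Two overlapping toy scan databases.
mkdir -p /tmp/nlog-demo && cd /tmp/nlog-demo
printf 'hostA 80 open\nhostB 22 open\n'  > scan1.db
printf 'hostB 22 open\nhostC 443 open\n' > scan2.db
# Concatenate, then sort -u drops the duplicate hostB record.
cat scan1.db scan2.db > /tmp/temp.db
sort -u /tmp/temp.db > /tmp/final.db
wc -l < /tmp/final.db   # 3 unique records
```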

In the next article, we will look at Nlog add-ons and creating Nlog extensions.

External Links:

Download Nlog at packetstormsecurity.com

2003 archive of secureaustin.com (the former official site of H.D. Moore, creator of Nlog)

The post Nlog: A Utility for Analyzing Nmap Logs appeared first on pfSense Setup HQ.

Nlog Add-Ons and Extensions


In the previous article, we discussed installing and using Nlog. In this article, we will discuss using add-ons and writing your own Nlog extensions.

Nlog Add-Ons

As mentioned earlier, Nlog is easily extensible and you can write add-ons to do other tests or functions on any protocols or ports found. In fact, there are several included with the program. If there is an add-on available, there will be a hypertext line next to the port and you can click on it to run the subprogram.

Nlog Built-in Extensions

Extension: Description
nlog-rpc.pl: Takes any RPC services that are found and attempts to find out if there are any current RPC attachments and exports for that service.
nlog-smb.pl: For any nodes running NetBIOS, this script tries to retrieve shares, user lists, and any other domain information it can get. It uses the user name and login specified in the nlog-config.ph file.
nlog-dns.pl: Runs a standard nslookup command on the IP address.
nlog-finger.pl: Runs a query against any finger service found running to see what information is sent.

If you examine these add-on scripts, you will observe that they are all just basic Perl programs. If you are experienced with Perl, you can write your own extensions to execute just about any function against your scanned hosts. For example, you can retrieve and display the HTTP header for any web servers found so you can more easily identify them. You don’t need to go overboard with this, because programs like Nessus can do much more comprehensive testing, but if you just need a banner or some small bit of information, then using Nlog is a good solution.


Nlog comes with a sample custom add-on called nlog-bind.pl. This script is designed to poll a DNS server and tell you what version of BIND (the Berkeley Internet Name Domain) it is running. However, this script is not finished; it is provided as an exercise to create your own add-ons. The sample script is in /nlog*/extras/bind/. The following procedure guides you through finishing the script. You can use that format to create any custom script of your own.

  1. Compile the script using the gcc compiler with the following command from that directory:
    gcc -o bindinfo binfo-wdp.c

    This creates a binary file called bindinfo in that directory.

  2. Copy this binary file to the directory where you are keeping your nlog scripts.
  3. Change the permissions on it to make it executable (remember that you have to be root to issue this command):
    chmod 700 bindinfo
  4. Open your nlog-config.ph file in a text editor.
  5. Add this line:
    $bindinfo = "/path/to/bindinfo";

    Replace path/to/bindinfo with the location where you put the binary file.

  6. Save this file.
  7. Now edit nlog-search.pl. This is the Perl script that creates your search results page.
  8. Find the section that looks like this:
    1: # here we place each cgi-handler into a temp var for readability.
    2: 
    3: $cgiSunRPC = "sunrpc+$cgidir/nlog-rpc.pl+SunRPC";
    4: $cgiSMB = "netbios-ssn+$cgidir/nlog-smb.pl+NetBIOS";
    5: $cgiFinger = "finger+$cgidir/nlog-finger.pl+Finger";
    6:
    7: $qcgilinks = "$cgiSunRPC $cgiSMB $cgiFinger";
  9. Between lines 5 and 6, add a line that looks like:
    $cgiBIND = "domain+$cgidir/nlog-bind.pl+BIND";
  10. Edit line 7 to look like this:
    $qcgilinks = "$cgiSunRPC $cgiSMB $cgiFinger $cgiBIND";

    Line 7 is also where you would add, in a similar fashion, links to any other scripts you had created.

  11. Copy the nlog-bind.pl file from this directory into your cgi-bin directory (/var/www/cgi on Mandriva), and change its permissions so the web server can read and execute it.

Now when your Nmap scans find port 53 open (which is generally a DNS server), you can click on the link that Nlog creates and find out what version of BIND is running. You can write additional scripts to extend Nlog by following the logic in this example.

External Links:

Download Nlog at packetstormsecurity.com

2003 archive of secureaustin.com (the former official site of H.D. Moore, creator of Nlog)

The post Nlog Add-Ons and Extensions appeared first on pfSense Setup HQ.

Uses for Nlog and Nmap


So now you can port scan with Nmap and sort and analyze the results with Nlog. What can you do with these programs? There are, indeed, some interesting applications for port scanners. Here are some examples for you to try on your network:

      1. Scan for the least common services: If you have a service or port number that is only showing up on one or two machines, chances are that it is not something that is standard for your network. It could be a Trojan horse or a banned service (e.g., a file-sharing application). It could also be a misconfigured machine running an FTP server or other type of public server. You can set Nlog to show the number of occurrences of each service and sort them by the least often occurring. This will generate a list for you to check. You probably won’t want to include your company’s servers in this scan, as they will have lots of one-of-a-kind services running. However, it would not hurt to scan these servers separately, either to fine-tune or eliminate extraneous services.
      2. Hunt for illicit/unknown web servers: Chances are that if you run one or more web servers for your company, you will see the HTTP service showing up a few times on your network. However, it is also likely that you will see it on machines where you don’t expect it. Some manufacturers of desktop computers now load small web servers by default on their systems for use by their technical support personnel. Unfortunately, these web servers are often barebones programs with security holes in them. You will also find web servers running on printers, routers, firewalls, and even switches and other dedicated hardware. You may need these servers to configure the hardware, but if you aren’t using them, you should shut them off. These mini-servers are often configured with no password protection by default and can offer a hacker a foothold onto that machine. They can also offer access to the files on the machines if an intruder knows how to manipulate them. Scan for these hidden web servers, and either turn them off or properly protect them. You should also search for ports other than 80 that are commonly used for HTTP. At the end of this article, there is a table listing some of those ports.
      3. Scan for servers running on desktops: Going a step further with the last exercise, restrict the IP range to only those that are nonserver machines and set a port range from 1 to 1024. This will find desktop machines running services that are normally done by servers, such as mail, web and FTP. Unless there is a good reason for this (e.g. PCAnywhere), your desktop machines should not be running these types of services.
      4. Hunt for Trojan horses: To hunt for Trojan horses on your network, run a scan of your network and translate it into the Nlog database format. Open the Nlog search page, select the ports, and set the range from 30,000 to 65,400. This is the favored range for Trojan horses because it is out of the range of normal services, so they usually go unnoticed (unless you are port scanning your network). However, just because there are some services running on high-level ports doesn’t always mean you have Trojan horses; still, it is worth paying attention to services running on these high port numbers. Once you’ve narrowed it down to the machine and port numbers, you can rule them out by checking the services running on those machines or by connecting to those ports and seeing if you get a service banner.
      5. Check your external network exposure: Put your Nmap box outside your network, either on a dial-up or home broadband connection, and try scanning your company’s public IP addresses. By doing this you will see what services are accessible from the Internet (and thereby to any port scanner-wielding person). This is the most vulnerable part of your network, and you should take extra care to secure any services that are public-facing by using a vulnerability scanner, such as the one described in the next chapter. It will also show if your firewall is properly filtering ports that it is forwarding to internal LAN addresses.
        So you’ve seen all the cool things you can do with a port scanner like Nmap. These programs are useful for finding out what you have running and where your exposures might be. But how do you know if those exposed points might be vulnerable? Or if services that are supposed to be open are safe and secure? That goes beyond the function of a port scanner and into the realm of a vulnerability scanner.



Web Ports

Port Number: Protocol
81: Alternate web
88: Web
443: HTTPS, secure web
8000-8002: Web
8080: Web
8888: Web

External Links:

Download Nlog at packetstormsecurity.com

2003 archive of secureaustin.com (the former official site of H.D. Moore, creator of Nlog)

The post Uses for Nlog and Nmap appeared first on pfSense Setup HQ.

Viewing all 115 articles
Browse latest View live