How to determine whether your machine is “Little Endian” or “Big Endian”.

What are Big and Little Endian?

Little and big endian are two ways of storing multibyte data types (int, float, etc.). In little endian machines, the least significant byte of a multibyte value is stored at the lowest address, i.e. first. In big endian machines, the most significant byte is stored first. For example, the 4-byte integer 0x01234567 is laid out in memory as 67 45 23 01 on a little endian machine and as 01 23 45 67 on a big endian machine.

Big Endian (Wikipedia)

Little Endian (Wikipedia)

Is there a quick way to determine endianness of your machine?
There are a number of ways to determine the endianness of your machine. Here is one quick way of doing it.

#include <stdio.h>
int main()
{
   unsigned int i = 1;
   char *c = (char*)&i;   /* c points at the lowest-addressed byte of i */
   if (*c)
       printf("Little endian\n");
   else
       printf("Big endian\n");
   getchar();
   return 0;
}

In the above program, a character pointer c points at the integer i. Since the size of a character is 1 byte, dereferencing the character pointer yields only the first (lowest-addressed) byte of the integer. If the machine is little endian, *c will be 1 (because the least significant byte is stored first); if the machine is big endian, *c will be 0.
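Another quick way, sketched below, uses a union instead of a pointer cast; the idea is the same, and this is just a minimal variant of the program above.

#include <stdio.h>

int main(void)
{
    /* The int and the byte array share the same storage, so
     * probe.bytes[0] is the byte stored at the lowest address. */
    union {
        unsigned int value;
        unsigned char bytes[sizeof(unsigned int)];
    } probe = { 1 };

    if (probe.bytes[0] == 1)
        printf("Little endian\n");
    else
        printf("Big endian\n");
    return 0;
}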

How to get a “codesigned” gdb on OS X?

Very interesting problem. I wanted to run gdb on my Mac but was not able to, because it was not codesigned. Here’s the solution.

The Darwin Kernel requires the debugger to have special permissions before it is allowed to control other processes. These permissions are granted by codesigning the GDB executable. Without these permissions, the debugger will report error messages such as:

Starting program: /x/y/foo
Unable to find Mach task port for process-id 28885: (os/kern) failure (0x5).
 (please check gdb is codesigned - see taskgated(8))

Codesigning requires a certificate. The following procedure explains how to create one:

(Note): I tried many times to create a certificate for gdb; the basic problem was in the certificate creation step. Create the certificate for “System”, not for “login”; that is the main pitfall.

  • Start the Keychain Access application (in /Applications/Utilities/Keychain Access.app)
  • Select the Keychain Access -> Certificate Assistant -> Create a Certificate… menu
  • Then:
    • Choose a name for the new certificate (this procedure will use “gdb-cert” as an example)
    • Set “Identity Type” to “Self Signed Root”
    • Set “Certificate Type” to “Code Signing”
    • Activate the “Let me override defaults” option
  • Click several times on “Continue” until the “Specify a Location For The Certificate” screen appears, then set “Keychain” to “System”
  • Click on “Continue” until the certificate is created
  • Finally, in the view, double-click on the new certificate, and set “When using this certificate” to “Always Trust”
  • Exit the Keychain Access application and restart the computer (this is unfortunately required)

Once a certificate has been created, the debugger can be codesigned as follows. In a Terminal, run the following command…

codesign -f -s  "gdb-cert"  <gnat_install_prefix>/bin/gdb

… where “gdb-cert” should be replaced by the actual certificate name chosen above, and <gnat_install_prefix> should be replaced by the location where you installed GNAT.
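As a quick sanity check, you can ask codesign to verify the binary afterwards (the exact output wording varies by OS X version):

codesign -vv <gnat_install_prefix>/bin/gdb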

Restoring lost commits in git

Hey, I am writing a post after a long time.

I was working in git and committed some changes; I forgot to push them to the branch and forgot the commit too. Then I reset the HEAD :(.

So, you just did a git reset --hard HEAD^ and threw out your last commit. Well, it turns out you really did need those changes. Don’t fear, git should still have your commit. When you do a reset, the commit you threw out goes into a “dangling” state. It’s still in git’s datastore, waiting for the next garbage collection to clean it up. So unless you’ve run a git gc since you tossed it, you should be in the clear to restore it.

$ git show-ref -h HEAD
  7c61179cbe51c050c5520b4399f7b14eec943754 HEAD

$ git reset --hard HEAD^
  HEAD is now at 39ba87b Fixing about and submit pages so they don't look stupid

$ git show-ref -h HEAD
  39ba87bf28b5bb223feffafb59638f6f46908cac HEAD

So our HEAD has been moved back by one commit. At this point, if we wanted it back, we could just git pull, but we’re assuming that only our local repository knows about the commit. We need the SHA1 of the commit so we can bring it back. We can prove that git still knows about the commit with the fsck command:

$ git fsck --lost-found
  [... some blobs omitted ...]
  dangling commit 7c61179cbe51c050c5520b4399f7b14eec943754

You can also see that git still knows about the commit by using the reflog command:

$ git reflog
  39ba87b... HEAD@{0}: HEAD~1: updating HEAD
  7c61179... HEAD@{1}: pull origin master: Fast forward
  [... lots of other refs ...]

So, we now have our SHA1: 7c61179. If we want to immediately apply it back onto our current branch, a git merge will recover the commit:

$ git merge 7c61179
  Updating 39ba87b..7c61179
  Fast forward
    css/screen.css |    4 ++++
    submit.html    |    4 ++--
    2 files changed, 6 insertions(+), 2 deletions(-)

This command will bring your lost changes back and make sure that HEAD is pointing at the commit. From here you can continue to work as normal! You could also check out the SHA1 into a new branch, but really a merge is the fastest and easiest way to restore that lost commit once you have the hash. If you have other ways, let us know in the comments!
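If you would rather take the new-branch route mentioned above, a minimal sketch (the branch name recovered is just an example):

$ git checkout -b recovered 7c61179

This puts the dangling commit on a named branch without merging it into your current one.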

Thanx to gitReady for this valuable post.

Recovering from Broken Grub

On Friday, I was trying to downgrade GRUB to grub-legacy. So I installed grub-legacy, knowing I was playing with the bootloader. When I restarted, as expected, GRUB was not able to find the OS. The problem got worse when I realized I hadn’t installed the stage1, stage1.5 and stage2 images, i.e. I never ran grub-mkconfig.

The grub I had didn’t even ship grub-install.

Grub Error

So I googled and didn’t find any solution. I read different blogs and websites and tried this.

  1. Use any live OS and run grub-install
First mount the partition where the OS is installed. You can find the partition by running:

$ blkid
/dev/sda1: UUID="ee51f4e9-1ef8-4b65-8ef4-299600e8cbf4" TYPE="ext4" PTTYPE="dos" PARTUUID="c679c6ed-01" 

/dev/sda2: UUID="cb97ec88-4282-459a-852f-f619138d46d9" TYPE="ext4" PARTUUID="c679c6ed-02"

Then mount it and make sure it is writable (here /dev/sda2 is the partition where the OS is installed):

sudo mount /dev/sda2 /mnt
sudo mount -o remount,rw /mnt

grub-install --boot-directory=/mnt/boot --recheck /dev/sda

Now the boot files are installed; reboot the machine. (Most probably you will get a GRUB rescue prompt on a black screen.)

Now you have to do a few things:

a. Find the partitions. 

ls

It will show you how many partitions there are; you may get something like

(hd0) (hd0,5) (hd0,1) (hd1) (hd1,1) (hd1,2) (fd0) (hd0,msdos1) (hd0,msdos2)

Then run

ls (hd0,msdos1)/

and observe the output; if you see the Linux root (where folders like etc and boot are present), then this is your root partition.

b. Set the root

set root=(hd0,msdos1)

The (hd0,0) notation is explained here. (Note: the list below describes GRUB legacy’s zero-based partition numbering; GRUB 2 counts partitions from 1, which is why the rescue shell shows names like (hd0,msdos1).)

  • The brackets are a must; all devices listed in GRUB menu must be enclosed in brackets.
  • hd stands for hard disk; alternatively, fd stands for floppy disk, cd stands for CD-ROM etc.
  • The first number (integer for geeks) refers to the physical hard drive number; in this case, the first drive, as they are counted from zero up. For example, hd2 refers to the third physical hard drive.
  • The second number refers to the partition number of the selected hard drive; again, partitions are counted from zero up. In this case, 1 stands for the second partition.

From here, it is evident that GRUB (menu) does not discriminate between IDE or SCSI drives or primary or logical partitions. The task of deciding which hard drive or partition may boot is left to BIOS and Stage 1. As you see, the notation is very simple.

Primary partitions are marked from 0 to 3 (hd?,0), (hd?,1), (hd?,2), (hd?,3). Logical partitions in the extended partition are counted from 4 up, regardless of the actual number of primary partitions on the hard disk, e.g. (hd1,7).

In my case I guessed: I tried setting the root as mentioned above, then used grub’s ls command. If typing ls /boot plus Tab shows anything, that is the partition where you actually have to re-install your GRUB.

c. Load the kernel

linux /boot/vmlinuz-linux ro root=/dev/sda2

d. Load the initramfs image

initrd /boot/initramfs-linux.img

Then Run

boot

You will be able to boot the desired OS. [1]

Link Aggregation LAG (IEEE 802.3ad)

Yesterday my colleague asked me about LAG: what does LAG mean, and what is it used for?

What does Link Aggregation (LAG) mean?

Link aggregation (LAG) describes various methods for using multiple parallel network connections to increase throughput beyond the limit that one link (one connection) can achieve. For link aggregation, the physical ports must reside on a single switch. Split Multi-Link Trunking (SMLT) and Routed SMLT (RSMLT) remove this limitation and allow the physical ports to be split between two switches. The technique is also known as Multi-Link Trunking (MLT), link bundling, Ethernet/network/NIC bonding, or NIC teaming.

Link Aggregation (LAG):

Link aggregation is a technique used in high-speed backbone networks to enable fast and inexpensive transmission of bulk data. Its best feature is the ability to increase network capacity while maintaining fast transmission speeds and not changing any hardware devices, thus reducing cost.

Cost Effectiveness: LAG is a very common technique for establishing a new network infrastructure with extra cabling above the current requirements. Labor cost is much higher than the cost of the cabling, so when a network extension is required the extra cables are used without incurring additional labor. However, this can only be done when extra ports are available.

Higher Link Availability: this is the best feature of LAG. The communication system keeps working even when a link fails. In such situations the link capacity is reduced, but data flow is not interrupted.

Network Backbone: formerly there were many techniques used for networking, but IEEE standards are always preferred. LAG supports network load balancing; different load-balancing algorithms are set by network engineers or administrators. Furthermore, network capacity can be increased in small increments, saving both resources and cost.

Limitations: in all kinds of implementations, each link and piece of hardware is standardized and engineered so as not to affect network efficiency or link speed. Additionally, with single switching, all ports of a given kind (802.3ad, broadcast, etc.) must reside on a single switch or the same logical switch.

How to set up LAG on a Linux box
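A minimal sketch using the Linux bonding driver via iproute2 is below; the interface names eth0/eth1 and the address are assumptions, and 802.3ad mode also needs matching LACP configuration on the switch:

# create an 802.3ad (LACP) bond from eth0 and eth1
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# verify the negotiated state
cat /proc/net/bonding/bond0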

Thanx to Techopedia

Seconds Since the “Epoch”

I was supposed to write an RT (real-time) logger which doesn’t make a single Linux call.
All I had was the count of seconds since 1 Jan 1970 (called the Epoch).

A value that approximates the number of seconds that have elapsed since the Epoch. A Coordinated Universal Time name (specified in terms of seconds (tm_sec), minutes (tm_min), hours (tm_hour), days since January 1 of the year (tm_yday), and calendar year minus 1900 (tm_year)) is related to a time represented as seconds since the Epoch, according to the expression below.

If the year is < 1970 or the value is negative, the relationship is undefined. If the year is >= 1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the C-language expression, where tm_sec, tm_min, tm_hour, tm_yday, and tm_year are all integer types:

tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
    (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 -
    ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400

The relationship between the actual time of day and the current value for seconds since the Epoch is unspecified.

How any changes to the value of seconds since the Epoch are made to align to a desired relationship with the current actual time is implementation-defined. As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds.

Note:
The last three terms of the expression add in a day for each year that follows a leap year starting with the first leap year since the Epoch. The first term adds a day every 4 years starting in 1973, the second subtracts a day back out every 100 years starting in 2001, and the third adds a day back in every 400 years starting in 2001. The divisions in the formula are integer divisions; that is, the remainder is discarded leaving only the integer quotient.
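The expression drops straight into C. Here is a minimal sketch; the test date (8 April 2014) is just an arbitrary example I picked:

#include <stdio.h>
#include <time.h>

/* The POSIX "seconds since the Epoch" expression, verbatim. */
static long long to_epoch(const struct tm *t)
{
    return t->tm_sec + t->tm_min*60LL + t->tm_hour*3600LL +
           t->tm_yday*86400LL + (t->tm_year - 70)*31536000LL +
           ((t->tm_year - 69)/4)*86400LL -
           ((t->tm_year - 1)/100)*86400LL +
           ((t->tm_year + 299)/400)*86400LL;
}

int main(void)
{
    struct tm t = { 0 };
    t.tm_year = 114;  /* 2014, i.e. calendar year minus 1900 */
    t.tm_yday = 97;   /* April 8: 31 + 28 + 31 + 7 days since January 1 */
    printf("%lld\n", to_epoch(&t));  /* prints 1396915200 */
    return 0;
}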

To convert epoch seconds to the current time, please look at this LINK.

Xenomai Timer

Xenomai has two time sources: the system timer, which counts the number of nanoseconds since 1970, and a hardware-dependent high-resolution counter which counts the time since an unspecified point in time (usually the system boot time). This hardware-dependent high-resolution counter is called the “tsc” on a PC, and gave its name to the Xenomai native API calls.

rt_timer_tsc returns the value of this hardware-dependent high-resolution counter.

rt_timer_info returns the same thing in the tsc member of the RT_TIMER_INFO structure, along with the value of the system timer read at exactly the same time as the high-resolution counter.

This allows you to establish a correspondence between the two time sources.

rt_alarm_inquire is not related to this; it returns some information about a given alarm.

Now, if you allow me, a little advice for the implementation of a “timer library”: you could be tempted to create only one periodic alarm object with Xenomai and manage a timer list yourself. Don’t do this. Creating an alarm object for each timer-library object makes Xenomai aware of all your application timers, and this has several advantages:
– it gives you information about all your timers in /proc/xenomai
– it allows Xenomai to use its anticipation algorithm for all your timers
– if you are concerned about the scalability of Xenomai’s timer-list management, you can check the options in the “Scalability” menu of the Xenomai configuration menu (the “Real-time subsystem” sub-menu of the kernel configuration menu)
more about timers

Xenomai POSIX skin supports two clocks:
CLOCK_REALTIME maps to the nucleus system clock, keeping time as the amount of time since the Epoch, with a resolution of one system clock tick.

CLOCK_MONOTONIC maps to an architecture-dependent high-resolution counter, so it is suitable for measuring short time intervals. However, when used for sleeping (with clock_nanosleep()), the CLOCK_MONOTONIC clock has a resolution of one system clock tick, like the CLOCK_REALTIME clock. [1]
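As a minimal sketch, reading both clocks boils down to plain clock_gettime() calls, which is what the POSIX skin intercepts; the same code also builds against ordinary POSIX (link with -lrt on older glibc):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec rt, mono;

    clock_gettime(CLOCK_REALTIME, &rt);    /* time since the Epoch */
    clock_gettime(CLOCK_MONOTONIC, &mono); /* time since an unspecified point */

    printf("CLOCK_REALTIME : %ld.%09ld\n", (long)rt.tv_sec, rt.tv_nsec);
    printf("CLOCK_MONOTONIC: %ld.%09ld\n", (long)mono.tv_sec, mono.tv_nsec);
    return 0;
}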

OpenSSL “HeartBleed” Bug

Monday afternoon, the IT world got a very nasty wakeup call: an emergency security advisory from the OpenSSL project warning about an open bug called “Heartbleed.” The bug could be used to pull a chunk of working memory from any server running its current software. There was an emergency patch, but until it was installed, tens of millions of servers were exposed. Anyone running a server was suddenly in crisis mode.

If the “Heartbleed” name sounds dramatic, this bug seems to live up to the hype. It’s already far worse than the GoToFail bug that embarrassed Apple earlier this year, both by the scale of computers affected and the depth of the breach. The new bug would let attackers pull the private keys to the server, letting attackers listen in on data traffic and potentially masquerade as the server. Even worse, it’s old: the bug dates back two years, and it’s still unclear how long anyone’s known about it.

OpenSSL isn’t widely known outside of the coding world, but as many as two out of three servers on the web rely on its software. The sudden reveal means anyone involved is now scrambling for a fix. Already, Yahoo has been exposed by the bug, and experts have advised any Yahoo users to steer clear of their accounts until the company has time to update their servers. (A Yahoo representative tells The Verge the core sites are now patched, although the team is still working to implement the fix across the rest of the site.) Dozens of other smaller companies have also reportedly been affected, including Imgur, Flickr, and LastPass (although LastPass says no unencrypted data was exposed). “It is catastrophically bad, just a hugely damaging bug,” says ICSI security researcher Nicholas Weaver.

Discovered by Google researcher Neel Mehta, the bug allows an attacker to pull 64k at random from a given server’s working memory. It’s a bit like fishing — attackers don’t know what usable data will be in the haul — but since it can be performed over and over again, there’s the potential for a lot of sensitive data to be exposed. The server’s private encryption keys are a particular target, since they’re necessarily kept in working memory and are easily identifiable among the data. That would allow attackers to eavesdrop on traffic to and from the service, and potentially decrypt any past traffic that had been stored in encrypted form.

For most privacy tools relying on OpenSSL, the takeaway is catastrophic. A blog post from the Tor Project told users, “if you need strong anonymity or privacy on the internet, you might want to stay away from the internet entirely for the next few days while things settle.” In many cases, a few days may not be enough. It will give services time to patch their servers, but if any private keys were compromised before the patch went up, it would give attackers free rein in the months to come. Servers can reset their certificates, but it’s slow and expensive, and experts suspect many of them may simply assume the patch is enough. “I bet that there will be a lot of vulnerable servers a year from now,” Weaver says. “This won’t get fixed.”

Apple, Google and Microsoft appear to be unaffected, along with the major e-banking services. Yahoo, on the other hand, was affected and leaking user credentials for a significant portion of the day before its core sites were fixed. More generally, any server running OpenSSL on Apache or Nginx will be affected, which implicates a huge variety of everyday websites and services.

For now, there are a few ways users can tell which services are safe — but the news isn’t reassuring. This site, built by developer Filippo Valsorda, offers a spot-check as to which services are currently unpatched, but the site’s code is also producing false negatives, so it shouldn’t be taken as definitively ruling anything out. Any patched server will also need to generate new SSL certificates to make sure attackers can’t use keys that were exposed in the breach. To check, use an SSL tracker like this one and look for a certificate’s “issued on” date, which should be dated after the recent patch. Resetting the certificates will take time and money, but if a compromised site keeps using a compromised certificate, they’ll be leaving themselves open to an attack.

It’s still early to tell what larger changes will be made as a result of the breach, but some lessons are already clear. Despite the vast infrastructure relying on OpenSSL, the open-source project is comparatively underfunded, and some experts have already called for more donations to the project to prevent vulnerabilities like Heartbleed from slipping through the cracks. Perfect Forward Secrecy could also have limited the damage from the bug, preventing decryption after the fact.

But the most troubling lesson might be how hard vulnerabilities are to discover, and how damaging they can be once fully revealed. “These are really subtle bugs,” Weaver says. “You might detect it if you ran it through a memory checker, but this is not the kind of thing that just shows up looking at the code.” That’s a credit to Google, who was rigorous enough to discover the bug — but for anyone relying on secure software, it’s a troubling thought.

Some comments clarifying the post:

  • The answer to this question is “maybe”. The article here has several points wrong, one of them being that the attacker cannot select which data they are going to have leak. The flaw basically blows out a memory register and returns whatever data happens to be in the extra bytes. It could be something, or, in most cases, it could be nothing. To get useful data, you would have to flood the server with TLS “heartbeat” requests.
  • As someone familiar with this tech, I’d say the author of this article did a decent job covering the main bases. The possibilities are huge. Getting a 64k chunk of random memory is pretty significant. There are just over 262,000 64k chunks in a 16GB server. An attacker can muster 262,000 requests in just a few minutes. Once the random jabs have shaken loose pretty much all the contents of the server’s memory, then the attacker can sift through the responses for the private SSL keys. The private SSL keys can be used by the attacker to imitate the host to visiting browsers, which will then submit login info. Having the server’s private SSL key can also greatly simplify a Man-in-the-middle attack on sessions that are supposed to be private between the browser and the server. Finally, the private SSL key also enables previously-captured network traffic to be fully decrypted.
  • The article is confusing about what the potential leakage is. There is the potential that an attacker could get any kind of data that was loaded into memory, which makes the bug nasty. But it’s not like an attacker could just hit the server and pull everything in memory, or download a specific kind of data. They would have to send millions of heartbeat requests just to guarantee that they got any useful data. And it still wouldn’t be guaranteed. I’m not trying to downplay the seriousness of the issue, but it’s probably not likely that people need to freak out about all their data being stolen right now. The serious concern is the loss of server private keys, so people should go through the process of updating their servers and re-keying their SSL certificates to ensure that they are safe.
  • No it doesn’t; it allows an attacker to see the contents of a small portion of RAM immediately following a chunk of RAM that OpenSSL uses. They have no control over what specific portion of memory it is nor what it contains; it could be anything, and what it is exactly depends on a bunch of pretty much random factors. The concern is that there is a possibility that it could contain something important (like the server’s private encryption keys), which would obviously be a problem. This is where the Verge article delves into sensationalist BS; however, they can’t write software that is essentially “push button to steal certs”. This would take time and resources to exploit in any meaningful way unless the attacker was very lucky. Again, this is basically like trying to steal someone’s password by walking behind their desk hoping that they happen to be typing it in as you walk by. It’s a big deal because relying on someone not getting lucky is not a good form of security, but the chances that this was exploited in a meaningful way are almost nil.

All technical knowledge about OpenSSL 1.0.1

An information disclosure vulnerability has been found, and promptly patched, in OpenSSL.

OpenSSL is a very widely used encryption library, responsible for putting the S in HTTPS, and the padlock in the address bar, for many websites.

The bug only exists in the OpenSSL 1.0.1 source code (from version 1.0.1 to 1.0.1f inclusive), because the faulty code relates to a fairly new feature known as the TLS Heartbeat Extension.

The heartbeat extension was first documented in RFC 6520 in February 2012.

TLS heartbeats are used as “keep alive” packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange.

Because the heartbeats consist of a request and a matching response, they allow either end to confirm not only that the session is open, but also that end-to-end connectivity is working properly.

Sending heartbeat requests

The RFC 6520 standard explicitly restricts the maximum size of a heartbeat request to 2^14 bytes (16KB), but OpenSSL itself generates far shorter requests.

Don’t worry if you don’t understand C; but if you do, the OpenSSL heartbeat request code looks like this:

unsigned int payload = 18; /* Sequence number + random bytes */
unsigned int padding = 16; /* Use minimum padding */

/* Check if padding is too long, payload and padding
* must not exceed 2^14 - 3 = 16381 bytes in total.
*/

OPENSSL_assert(payload + padding <= 16381);

/* Create HeartBeat message, we just use a sequence number
 * as payload to distuingish different messages and add
 * some random stuff.
 *  - Message Type, 1 byte
 *  - Payload Length, 2 bytes (unsigned int)
 *  - Payload, the sequence number (2 bytes uint)
 *  - Payload, random bytes (16 bytes uint)
 *  - Padding
 */

buf = OPENSSL_malloc(1 + 2 + payload + padding);
p = buf;
/* Message Type */
*p++ = TLS1_HB_REQUEST;
/* Payload length (18 bytes here) */
s2n(payload, p);
/* Sequence number */
s2n(s->tlsext_hb_seq, p);
/* 16 random bytes */
RAND_pseudo_bytes(p, 16);
p += 16;
/* Random padding */
RAND_pseudo_bytes(p, padding);

ret = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buf, 3 + payload + padding);

The reason that the code says that “payload and padding must not exceed 16381 bytes in total” is that the 16KByte (16384 byte) maximum heartbeat request size includes one byte to signal that this is a TLS1_HB_REQUEST, and two bytes to denote the length of the payload data in the request.

As the code stands, the OPENSSL_assert to verify that payload + padding <= 16381 is redundant, because the payload size is hard-wired to 18 bytes and the padding size to 16.

But the programmer has tried to do the right thing: put in the check anyway, in case someone changes those payload or padding sizes in the future without considering the consequences.

The code then transmits a heartbeat request consisting of:

  • The single byte 0x01 (denoting that this is a TLS1_HB_REQUEST).
  • Two bytes containing the 16-bit representation of 18 (the length of the payload).
  • Two bytes of payload consisting of a 16-bit sequence number.
  • 16 bytes of random data making up the rest of the 18-byte payload.
  • 16 further random padding bytes, required by the standard.

Replying to heartbeat requests

When vulnerable versions of OpenSSL 1.0.1 respond to a heartbeat request, they aren’t quite so careful in processing the received data.

Heartbeat replies are supposed to contain a copy of the payload data from the request, as a way of verifying that the encrypted circuit is still working both ways.

It turns out that you can send a small heartbeat request, but sneakily set your payload length field to 0xFFFF (65535 bytes).

Then, OpenSSL will uncomplainingly copy 65535 bytes from your request packet, even though you didn’t send across that many bytes:

/* Allocate memory for the response, size is 1 byte
 * message type, plus 2 bytes payload length, plus
 * payload, plus padding
 */
buffer = OPENSSL_malloc(1 + 2 + payload + padding);
bp = buffer;

/* Enter response type, length and copy payload */
*bp++ = TLS1_HB_RESPONSE;
s2n(payload, bp);
memcpy(bp, pl, payload);
bp += payload;
/* Random padding */
RAND_pseudo_bytes(bp, padding);

r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);

That means OpenSSL runs off the end of your data and scoops up whatever else is next to it in memory at the other end of the connection, for a potential data leakage of approximately 64KB each time you send a malformed heartbeat request.

This bug has been rather melodramatically named “heartbleed,” for reasons that should now be obvious.

According to the Finnish National Cyber Security Centre, the sort of data that “bleeds” when the bug is triggered varies enormously, but may include message contents, user credentials, session keys and even copies of a server’s own private keys.

That’s not good!

Fixing the problem

Fortunately, there’s a fix already: simply upgrade to OpenSSL 1.0.1g.

If you don’t want to or can’t do that, you can rebuild your current version of OpenSSL from source without TLS Heartbeat support, by adding -DOPENSSL_NO_HEARTBEATS at compile time.

Both of these immunise you from this flaw.
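To tell which version you are running, a quick check (the second line is example output from a patched box):

$ openssl version
OpenSSL 1.0.1g 7 Apr 2014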

The new OpenSSL version includes a bounds check to make sure the payload length you specified isn’t longer than the data you actually sent:

/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
        return 0; /* silently discard */
hbtype = *p++;
n2s(p, payload);
if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0; /* silently discard per RFC 6520 sec. 4 */

The -DOPENSSL_NO_HEARTBEATS compile-time option simply omits the buggy code altogether from vulnerable versions.

The lessons to learn

If you’re a programmer, remember: always double-check the data that the other side sent you before you use it to control the operation of your own code.

And remember: buffer overflows should always be treated as serious, even if they don’t lead to remote code execution.

Data leakage bugs can be just as disastrous, as this flaw demonstrates.

For further information…

If you’d like to know more about the main sorts of vulnerability, from RCE (remote code execution) through EoP (elevation of privilege) to Information Disclosure, please listen to our Techknow podcast, Understanding Vulnerabilities.

This post is totally copied from “The Verge” and “SOPHOS”. This is just an informative post.

Semaphore and Critical section

Before understanding semaphores, we should first discuss the critical section.
A critical section is a piece of code that accesses a shared resource and therefore must not be executed by two or more processes at the same time. Simultaneous access from multiple processes could leave our data inconsistent. To avoid this inconsistency we use synchronization methods.

So a semaphore is one of the synchronization techniques. It is a locking mechanism used to guard access to the critical section. If a process wants to access the critical section, it has to acquire the lock first and free the lock once it has completed its work. When one process already holds the lock and another process tries to acquire it, that process has to wait until the lock is freed by the previous process.

Now suppose we have n instances of the same object, guarded by a counting semaphore initialized to n. If a process tries to acquire the semaphore while its value is positive, the value is decreased by one; if the value is zero, the process has to wait until a lock becomes available. We can understand this with the following example.

total number of objects = 3

total number of locks available = 3

Process    Step       Lock available    Lock value    Status
P1         acquire    Yes               2             Acquired
P2         acquire    Yes               1             Acquired
P3         acquire    Yes               0             Acquired
P4         acquire    No                0             Wait
P2         release    Yes               1             Released
P4         acquire    Yes               0             Acquired
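A minimal sketch of the same idea with POSIX counting semaphores; threads stand in for the processes P1..P4 in the table above, and the sleep is only there to make the contention visible:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t lock;  /* counting semaphore, initialized to 3 */

static void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&lock);              /* acquire: blocks while the count is 0 */
    printf("P%ld acquired\n", id);
    sleep(1);                     /* critical section: use one of the 3 objects */
    printf("P%ld released\n", id);
    sem_post(&lock);              /* release: count goes back up, waking a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&lock, 0, 3);        /* 3 objects -> the count starts at 3 */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)(i + 1));
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&lock);
    return 0;                     /* compile with: gcc demo.c -pthread */
}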

Understanding Xenomai

Before understanding Xenomai, it’s really important to understand a normal Linux OS and a real-time OS, and how they execute their instructions.
Definition from Xenomai’s Website : Xenomai is a real-time development framework cooperating with the Linux kernel, in order to provide a pervasive, interface-agnostic, hard real-time support to user-space applications, seamlessly integrated into the GNU/Linux environment. Xenomai is based on an abstract RTOS core, usable for building any kind of real-time interfaces, over a nucleus which exports a set of generic RTOS services. Any number of RTOS personalities called “skins” can then be built over the nucleus, providing their own specific interface to the applications, by using the services of a single generic core to implement it. Xenomai runs over seven architectures (namely ppc, blackfin, arm, x86, x86_64, ia64 and ppc64), a variety of embedded and server platforms, and can be coupled to two major Linux kernel versions (2.4 and 2.6), for MMU-enabled and MMU-less systems. Supported real-time APIs include POSIX 1003.1b, VxWorks, pSOS+, VRTX and uITRON.

Difference between an RTOS and a normal OS

– The Linux scheduler, like that of other OSes such as Windows or MacOS, is designed for best average response, so it feels fast and interactive even when running many programs. However, it doesn’t guarantee that any particular task will always run by a given deadline. A task may be suspended for an arbitrarily long time, for example while a Linux device driver services a disk interrupt.

– Scheduling guarantees are offered by real-time operating systems (RTOSes), such as QNX, LynxOS or VxWorks. RTOSes are typically used for control or communications applications, not for general purpose computing.

– The general idea of RT Linux is that a small real-time kernel runs beneath Linux, meaning that the real-time kernel has a higher priority than the Linux kernel. Real-time tasks are executed by the real-time kernel, and normal Linux programs are allowed to run when no real-time tasks have to be executed. Linux can be considered as the idle task of the real-time scheduler. When this idle task runs, it executes its own scheduler and schedules the normal Linux processes. Since the real-time kernel has a higher priority, a normal Linux process is preempted when a real-time task becomes ready to run and the real-time task is executed immediately.

How is the real-time kernel given higher priority than Linux kernel?

Basically, an operating system is driven by interrupts, which can be considered as the heartbeats of a computer:

1. All programs running in an OS are scheduled by a scheduler which is driven by timer interrupts of a clock to reschedule at certain times.
2. An executing program can block or voluntary give up the CPU in which case the scheduler is informed by means of a software interrupt (system call).
3. Hardware can generate interrupts to interrupt the normal scheduled work of the OS for fast handling of hardware.

RT Linux uses the flow of interrupts to give the real-time kernel a higher priority than the Linux kernel:

1. When an interrupt arrives, it is first given to the real-time kernel, and not to the Linux kernel. But interrupts are stored to give them later to Linux when the real-time kernel is done.
2. As first in row, the real-time kernel can run its real-time tasks driven by these interrupts.
3. Only when the real-time kernel is not running anything, the interrupts which were stored are passed on to the Linux kernel.
4. As second in row, Linux can schedule its own processes driven by these interrupts.

Hence, when a normal Linux program runs and a new interrupt arrives:

1. It is first handled by an interrupt handler set by the real-time kernel;
2. The code in the interrupt handler awakes a real-time task;
3. Immediately after the interrupt handler, the real-time scheduler is called ;
4. The real-time scheduler observes that another real-time task is ready to run, so it puts the Linux kernel to sleep, and awakes the real-time task.
Hence, for the real-time kernel and the Linux kernel to coexist on a single machine, a special way of passing the interrupts between the real-time kernel and the Linux kernel is needed. Each flavor of RT Linux does this in its own way. Xenomai uses an interrupt pipeline from the [Adeos project][1]. For more information, see also [Life with Adeos][2].
[1]: http://home.gna.org/adeos/
[2]: http://www.xenomai.org/documentation/xenomai-2.3/pdf/Life-with-Adeos-rev-B.pdf

Xenomai
———–

The Xenomai project was launched in August 2001.
Xenomai is based on an abstract RTOS core, usable for building any kind of real-time interfaces, over a nucleus which exports a set of generic RTOS services. Any number of RTOS personalities called “skins” can then be built over the nucleus, providing their own specific interface to the applications, by using the services of a single generic core to implement it.
The following skins are implemented on top of the generic core:
POSIX
pSOS+
VxWorks
VRTX
native: the Xenomai skin
uITRON
RTAI: only in kernel threads
Xenomai allows real-time threads to run either strictly in kernel space or within the address space of a Linux process. A real-time task in user space still has the benefit of memory protection, but is scheduled by Xenomai directly, and no longer by the Linux kernel. The worst-case scheduling latency of such a task is always near the hardware limits and predictable, since Xenomai is not bound to synchronizing with Linux kernel activity in such a context, and can preempt any regular Linux activity with no delay. Hence, the preferred execution environment for Xenomai applications is user-space context.
But there might be a few cases where running some of the real-time code embodied into kernel modules is required, especially with legacy systems or very low-end platforms with under-performing MMU hardware. For this reason, Xenomai’s native API provides the same set of real-time services in a seamless manner to applications, regardless of their execution space. Additionally, some applications may need real-time activities in both spaces to cooperate, therefore special care has been taken to allow the latter to work on the exact same set of API objects.
In our terminology, the terms “thread” and “task” have the same meaning. When talking about a Xenomai task, we refer to a real-time task in user space, i.e., within the address space of a Linux process, not to be confused with a regular Linux task/thread.
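To make the user-space story concrete, here is a minimal sketch against the Xenomai 2.x native skin; the task name, priority and 1 ms period are arbitrary choices, and exact headers and flags depend on your Xenomai version, so treat it as an outline rather than a reference:

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/timer.h>

static RT_TASK demo_task;

static void demo(void *arg)
{
    /* make this task periodic: 1 ms period, starting now */
    rt_task_set_periodic(NULL, TM_NOW, 1000000);
    for (int i = 0; i < 5; i++) {
        rt_task_wait_period(NULL);
        /* note: printf drops the task to secondary mode; fine for a demo */
        printf("tick %d at %llu ns\n", i, (unsigned long long)rt_timer_read());
    }
}

int main(void)
{
    mlockall(MCL_CURRENT | MCL_FUTURE);   /* avoid page faults in the RT path */
    rt_task_create(&demo_task, "demo", 0, 50, 0);
    rt_task_start(&demo_task, &demo, NULL);
    pause();                              /* let the real-time task run */
    return 0;
}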