How to determine whether your machine is “Little Endian” or “Big Endian”.

What are Big and Little Endian?

Little endian and big endian are two ways of storing multibyte data types (int, float, etc.). On little-endian machines, the last byte (the least significant byte) of the binary representation of a multibyte data type is stored first. On big-endian machines, the first byte (the most significant byte) of the binary representation is stored first.

Big Endian(Wikipedia)

Little Endian(Wikipedia)

Is there a quick way to determine endianness of your machine?
There are a number of ways to determine the endianness of your machine. Here is one quick way of doing it.

#include <stdio.h>
int main()
{
   unsigned int i = 1;
   char *c = (char*)&i;
   if (*c)   
       printf("Little endian");
   else
       printf("Big endian");
   getchar();
   return 0;
}

In the above program, a character pointer c points to an integer i. Since the size of a char is 1 byte, dereferencing the character pointer yields only the first byte of the integer. If the machine is little endian then *c will be 1 (because the least significant byte is stored first), and if the machine is big endian then *c will be 0.
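The same check can also be written with a union (a small sketch along the same lines, not from the original post), since all members of a union start at the same address:

#include <stdio.h>
int main()
{
    /* u.i and u.c share the same storage, so u.c[0] is the
       first byte of the integer in memory */
    union {
        unsigned int i;
        unsigned char c[sizeof(unsigned int)];
    } u = { 1 };
    if (u.c[0])
        printf("Little endian");
    else
        printf("Big endian");
    return 0;
}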

How to get a “codesigned” gdb on OS X?

A very interesting problem: I wanted to run gdb on my Mac but was not able to, because it was not code-signed. Here’s the solution.

The Darwin Kernel requires the debugger to have special permissions before it is allowed to control other processes. These permissions are granted by codesigning the GDB executable. Without these permissions, the debugger will report error messages such as:

Starting program: /x/y/foo
Unable to find Mach task port for process-id 28885: (os/kern) failure (0x5).
 (please check gdb is codesigned - see taskgated(8))

Codesigning requires a certificate. The following procedure explains how to create one:

(Note): I tried creating the certificate for gdb many times; the basic problem was that while creating the certificate you must create it for “System”, not for “login”. That was the main issue.

  • Start the Keychain Access application (in /Applications/Utilities/Keychain Access.app)
  • Select the Keychain Access -> Certificate Assistant -> Create a Certificate… menu
  • Then:
    • Choose a name for the new certificate (this procedure will use “gdb-cert” as an example)
    • Set “Identity Type” to “Self Signed Root”
    • Set “Certificate Type” to “Code Signing”
    • Activate the “Let me override defaults” option
  • Click several times on “Continue” until the “Specify a Location For The Certificate” screen appears, then set “Keychain” to “System”
  • Click on “Continue” until the certificate is created
  • Finally, in the view, double-click on the new certificate, and set “When using this certificate” to “Always Trust”
  • Exit the Keychain Access application and restart the computer (this is unfortunately required)

Once a certificate has been created, the debugger can be codesigned as follows. In a Terminal, run the following command…

codesign -f -s  "gdb-cert"  <gnat_install_prefix>/bin/gdb

… where “gdb-cert” should be replaced by the actual certificate name chosen above, and <gnat_install_prefix> should be replaced by the location where you installed GNAT.
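You can then sanity-check the signature (a quick extra step, not part of the original procedure):

codesign --verify --verbose <gnat_install_prefix>/bin/gdb
codesign --display --verbose <gnat_install_prefix>/bin/gdb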

Restoring lost commits in git

Hey, I am writing a post after a long time.

I was working in git and committed some changes; I forgot to push them to the branch, and I forgot the commit too. Then I reset HEAD :(.

So, you just did a git reset --hard HEAD^ and threw out your last commit. Well, it turns out you really did need those changes. Don’t fear, git should still have your commit. When you do a reset, the commit you threw out goes into a “dangling” state. It’s still in git’s datastore, waiting for the next garbage collection to clean it up. So unless you’ve run a git gc since you tossed it, you should be in the clear to restore it.

$ git show-ref -h HEAD
  7c61179cbe51c050c5520b4399f7b14eec943754 HEAD

$ git reset --hard HEAD^
  HEAD is now at 39ba87b Fixing about and submit pages so they don't look stupid

$ git show-ref -h HEAD
  39ba87bf28b5bb223feffafb59638f6f46908cac HEAD

So our HEAD has been backed up by one commit. At this point if we wanted it back we could just git pull, but we’re assuming that only our local repository knows about the commit. We need the SHA1 of the commit so we can bring it back. We can prove that git knows about the commit still with the fsck command:

$ git fsck --lost-found
  [... some blobs omitted ...]
  dangling commit 7c61179cbe51c050c5520b4399f7b14eec943754

You can also see that git still knows about the commit by using the reflog command:

$ git reflog
  39ba87b... HEAD@{0}: HEAD~1: updating HEAD
  7c61179... HEAD@{1}: pull origin master: Fast forward
  [... lots of other refs ...]

So, we now have our SHA1: 7c61179. If we want to immediately apply it back onto our current branch, doing a git merge will recover the commit:

$ git merge 7c61179
  Updating 39ba87b..7c61179
  Fast forward
    css/screen.css |    4 ++++
    submit.html    |    4 ++--
    2 files changed, 6 insertions(+), 2 deletions(-)

This command will bring your lost changes back and make sure that HEAD is pointing at the commit. From here you can continue to work as normal! You could also check out the SHA1 into a new branch (see the sketch below), but really a merge is the fastest and easiest way to restore that lost commit once you have the hash. If you have other ways let us know in the comments!
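For example, the branch-based recovery mentioned above could look like this (the branch name is just an example):

$ git checkout -b recovered-commit 7c61179
$ git log -1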

Thanks to gitready for this valuable post.

Recovering from Broken Grub

On Friday I was trying to downgrade GRUB to grub-legacy, so I installed grub-legacy; I knew I was playing with the bootloader. When I restarted my OS, as expected GRUB was not able to find the OS. The problem got worse when I realized that I hadn’t installed the stage1, stage1.5 and stage2 images, i.e. I hadn’t run the required commands (grub-mkconfig).

Dos grub didn’t had grub-install, 

Grub Error

So i googled didn’t found any solution. I read from different blog, websites and tried this.

  1. Use any live OS and run grub-install.
First mount the partition where the OS is installed. You can find the partition by running:

# blkid
/dev/sda1: UUID="ee51f4e9-1ef8-4b65-8ef4-299600e8cbf4" TYPE="ext4" PTTYPE="dos" PARTUUID="c679c6ed-01" 

/dev/sda2: UUID="cb97ec88-4282-459a-852f-f619138d46d9" TYPE="ext4" PARTUUID="c679c6ed-02"

then run

sudo mount /dev/sda2 /mnt

(make sure the partition is mounted in write mode)

sudo mount -o remount,rw /mnt

(here /dev/sda2 is the partition where the OS is installed)

grub-install --boot-directory=/mnt/boot --recheck /dev/sda
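Since my actual problem was that grub-mkconfig had never been run, you can also regenerate the GRUB configuration from the live OS before rebooting. A rough sketch, assuming the OS partition is still mounted at /mnt:

sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-mkconfig -o /boot/grub/grub.cfg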

Now that the images are installed, reboot the machine. (Most probably you will get a black GRUB screen/prompt.)

Now you have to do four things:

a. Find the partitions. 

ls

It will show you how many partitions are there; you may get something like

(hd0) (hd0,5) (hd0,1) (hd1) (hd1,1) (hd1,2) (fd0) (hd0,msdos1) (hd0,msdos2)

Then run

ls (hd0,0)/

and observe the output. If you see the Linux root filesystem (where directories like etc and boot are present), then this is your root partition.

b. Set the root

set root=(hd0,0)

The (hd0,0) notation is explained here:

  • The brackets are a must; all devices listed in GRUB menu must be enclosed in brackets.
  • hd stands for hard disk; alternatively, fd stands for floppy disk, cd stands for CD-ROM etc.
  • The first number (integer for geeks) refers to the physical hard drive number; in this case, the first drive, as they are counted from zero up. For example, hd2 refers to the third physical hard drive.
  • The second number refers to the partition number of the selected hard drive; again, partitions are counted from zero up. For example, 1 stands for the second partition.

From here, it is evident that GRUB (menu) does not discriminate between IDE or SCSI drives or primary or logical partitions. The task of deciding which hard drive or partition may boot is left to BIOS and Stage 1. As you see, the notation is very simple.

Primary partitions are marked from 0 to 3 (hd?,0), (hd?,1), (hd?,2), (hd?,3). Logical partitions in the extended partition are counted from 4 up, regardless of the actual number of primary partitions on the hard disk, e.g. (hd1,7).

In my case I guessed: I tried setting the root as mentioned above and then used GRUB’s ls command. If typing ls /boot and pressing Tab shows anything, that is the partition on which you actually have to re-install your GRUB.

c. Load the kernel

linux /boot/vmlinuz-linux ro root=/dev/sda2

d. Load the initramfs image

initrd /boot/initramfs-linux.img

(The exact kernel and initramfs file names depend on your distribution; use the ones actually present in /boot.)

Then run

boot

You will be able to boot the desired OS. [1]

Link Aggregation LAG (IEEE 802.3ad)

Yesterday my colleague asked me about LAG: what’s the meaning of LAG and what’s the use of it?

What does Link Aggregation (LAG) mean?

Link aggregation (LAG) describes various methods of using multiple parallel network connections to increase throughput beyond the limit that one link (one connection) can achieve. For link aggregation, the physical ports must reside on a single switch. Split Multi-Link Trunking (SMLT) and Routed-SMLT (RSMLT) remove this limitation and allow the physical ports to be split between two switches. The concept is also known as Multi-Link Trunking (MLT), Link Bundling, Ethernet/Network/NIC Bonding or NIC teaming.

Link Aggregation (LAG) :

Link aggregation is a technique used in a high-speed backbone network to enable the fast and inexpensive transmission of bulk data. The best feature of link aggregation is its ability to increase the network capacity while maintaining a fast transmission speed and not changing any hardware devices, thus reducing cost.

Cost effectiveness: LAG is a very common technique for establishing a new network infrastructure with extra cabling above the current requirements. Labor cost is much higher than the cost of cabling, so when a network extension is required the extra cables can be used without incurring any additional labor. However, this works only when extra ports are available.

Higher link availability: this is the best feature of LAG. The communication system keeps working even when a link fails. In such situations the link capacity is reduced, but data flow is not interrupted.

Network backbone: formerly there were many techniques used for networking, but IEEE standards are always preferred. LAG supports network load balancing; different load-balancing algorithms are set by network engineers or administrators. Furthermore, network speed can be increased in small increments, saving both resources and cost.

Limitations: in all kinds of implementations, each link and piece of hardware is standardized and engineered so as not to affect the network efficiency or link speed. Additionally, with single switching, all kinds of ports (802.3ad, broadcast, etc.) must reside on a single switch or the same logical switch.

How to set up LAG on a Linux box
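Here is a rough sketch using the bonding driver and the iproute2 tools; the interface names (eth0, eth1), the address and the bond name are only examples, and 802.3ad mode also needs LACP configured on the switch side:

# load the bonding driver
sudo modprobe bonding

# create a bond interface in 802.3ad (LACP) mode
sudo ip link add bond0 type bond mode 802.3ad

# enslave two physical NICs (they must be down first)
sudo ip link set eth0 down
sudo ip link set eth1 down
sudo ip link set eth0 master bond0
sudo ip link set eth1 master bond0

# bring the bond up and give it an address
sudo ip link set bond0 up
sudo ip addr add 192.168.1.10/24 dev bond0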

Thanks to Techopedia.

Seconds Since the “Epoch”

I was supposed to write RT (real-time) logging that doesn’t make a single Linux call.
All I had was the number of seconds since 1st Jan 1970 (called the Epoch).

A value that approximates the number of seconds that have elapsed since the Epoch. A Coordinated Universal Time name (specified in terms of seconds (tm_sec), minutes (tm_min), hours (tm_hour), days since January 1 of the year (tm_yday), and calendar year minus 1900 (tm_year)) is related to a time represented as seconds since the Epoch, according to the expression below.

If the year is < 1970 or the value is negative, the relationship is undefined. If the year is >= 1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the following C-language expression, where tm_sec, tm_min, tm_hour, tm_yday, and tm_year are all integer types:

tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
    (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 -
    ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400

The relationship between the actual time of day and the current value for seconds since the Epoch is unspecified.

How any changes to the value of seconds since the Epoch are made to align to a desired relationship with the current actual time is implementation-defined. As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds.

Note:
The last three terms of the expression add in a day for each year that follows a leap year starting with the first leap year since the Epoch. The first term adds a day every 4 years starting in 1973, the second subtracts a day back out every 100 years starting in 2001, and the third adds a day back in every 400 years starting in 2001. The divisions in the formula are integer divisions; that is, the remainder is discarded leaving only the integer quotient.
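As a small illustration (mine, not part of the standard), the expression drops straight into a C function; the tm_* values follow the usual struct tm conventions (tm_year is years since 1900, tm_yday is days since January 1):

#include <stdio.h>

/* seconds since the Epoch, computed exactly as in the expression above */
long long epoch_seconds(int tm_sec, int tm_min, int tm_hour,
                        int tm_yday, int tm_year)
{
    return tm_sec + tm_min*60LL + tm_hour*3600LL + tm_yday*86400LL +
           (tm_year-70)*31536000LL + ((tm_year-69)/4)*86400LL -
           ((tm_year-1)/100)*86400LL + ((tm_year+299)/400)*86400LL;
}

int main(void)
{
    /* 1 Jan 2000, 00:00:00 UTC -> tm_year = 100, tm_yday = 0 */
    printf("%lld\n", epoch_seconds(0, 0, 0, 0, 100));   /* prints 946684800 */
    return 0;
}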

To convert epoch seconds to the current time, please look at this LINK.

Xenomai Timer

Xenomai has two time sources: the system timer, which counts the number of nanoseconds since 1970, and a hardware-dependent high-resolution counter which counts the time since an unspecified point in time (usually the system boot time). This hardware-dependent high-resolution counter is called “tsc” on a PC, and gave its name to the Xenomai native API calls. rt_timer_tsc returns the value of this hardware-dependent high-resolution counter.
rt_timer_info returns the same thing in the tsc member of the RT_TIMER_INFO structure, and the value of the system timer at exactly the same time as when the high-resolution counter was read.

This makes it possible to establish a correspondence between the two time sources.
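A minimal sketch of sampling both sources back to back from user space, assuming the Xenomai 2.x native skin (the <native/timer.h> header and rt_timer_read are my assumptions; rt_timer_tsc is the call described above):

#include <stdio.h>
#include <native/timer.h>

void show_time_sources(void)
{
    RTIME ns  = rt_timer_read();   /* system timer, nanoseconds since 1970 */
    RTIME tsc = rt_timer_tsc();    /* hardware high-resolution counter */

    printf("system timer: %llu ns, tsc: %llu ticks\n",
           (unsigned long long)ns, (unsigned long long)tsc);
}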

rt_alarm_inquire is not related to this; it returns some information about a given alarm.

Now, if you allow me, a little advice for the implementation of a “timer library”: you could be tempted to create only one periodic alarm object with Xenomai and to manage a timer list yourself. Don’t do this. Creating an alarm object for each timer-library object makes Xenomai aware of the existence of all your application timers, and this has several advantages:

– it gives you information about all your timers in /proc/xenomai
– it allows Xenomai to use its anticipation algorithm for all your timers
– if you are concerned about the scalability of Xenomai’s timer list management, you can check the options in the “Scalability” menu of the Xenomai configuration menu (the “Real-time subsystem” sub-menu of the kernel configuration menu)

More about timers.

Xenomai POSIX skin supports two clocks:
CLOCK_REALTIME maps to the nucleus system clock, keeping time as the amount of time since the Epoch, with a resolution of one system clock tick.

CLOCK_MONOTONIC maps to an architecture-dependent high resolution counter, so is suitable for measuring short time intervals. However, when used for sleeping (with clock_nanosleep()), the CLOCK_MONOTONIC clock has a resolution of one system clock tick, like the CLOCK_REALTIME clock.[1]
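A small sketch of the same two clocks through the standard POSIX calls that the skin wraps: CLOCK_REALTIME for time since the Epoch, CLOCK_MONOTONIC for measuring a short interval:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec wall, t0, t1;

    /* wall-clock time since the Epoch */
    clock_gettime(CLOCK_REALTIME, &wall);
    printf("since Epoch: %lld s\n", (long long)wall.tv_sec);

    /* short interval measured with the monotonic clock */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
    printf("elapsed: %lld ns\n", ns);
    return 0;
}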

Semaphore and Critical section

Before understanding semaphores we should first discuss the critical section.
A critical section is a piece of code that accesses shared data and may be reached by two or more processes at the same time. Because of such simultaneous access our data might become inconsistent. To avoid this inconsistency we use synchronization methods.

So a semaphore is one of the synchronization techniques. It is a locking mechanism used to guard access to the critical section. If a process wants to access the critical section it has to acquire the lock first, and it frees the lock once it has completed its work. When one process already holds the lock and another process tries to acquire it, that process has to wait until the lock is freed by the previous process.

Suppose we have a total of n identical objects and, correspondingly, n locks. If a process tries to acquire a lock and one is available, the lock value is decreased by one; if no lock is available, the process has to wait until one becomes available. We can understand this with the following example.

total number of objects = 3

total number of locks available = 3

Process    Step       Lock available    Lock value    Status
P1         acquire    Yes               2             Acquired
P2         acquire    Yes               1             Acquired
P3         acquire    Yes               0             Acquired
P4         acquire    No                0             Wait
P2         release    Yes               1             Released
P4         acquire    Yes               0             Acquired
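The same sequence expressed with a POSIX counting semaphore (a minimal single-threaded sketch that only mirrors the table; in a real program each sem_wait()/sem_post() would live in a different process or thread):

#include <stdio.h>
#include <semaphore.h>

int main(void)
{
    sem_t lock;
    int value;

    /* 3 objects -> counting semaphore initialised to 3 */
    sem_init(&lock, 0, 3);

    sem_wait(&lock);   /* P1 acquires, value 3 -> 2 */
    sem_wait(&lock);   /* P2 acquires, value 2 -> 1 */
    sem_wait(&lock);   /* P3 acquires, value 1 -> 0 */
    /* a fourth sem_wait() here would block until someone posts */

    sem_post(&lock);   /* P2 releases, value 0 -> 1 */
    sem_wait(&lock);   /* P4 can now acquire, value 1 -> 0 */

    sem_getvalue(&lock, &value);
    printf("semaphore value: %d\n", value);   /* prints 0 */

    sem_destroy(&lock);
    return 0;
}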

Understanding Xenomai

Before understanding Xenomai it’s really important to understand the Normal Linux os and Real Time OS and how they execute their instructions.
Definition from Xenomai’s Website : Xenomai is a real-time development framework cooperating with the Linux kernel, in order to provide a pervasive, interface-agnostic, hard real-time support to user-space applications, seamlessly integrated into the GNU/Linux environment. Xenomai is based on an abstract RTOS core, usable for building any kind of real-time interfaces, over a nucleus which exports a set of generic RTOS services. Any number of RTOS personalities called “skins” can then be built over the nucleus, providing their own specific interface to the applications, by using the services of a single generic core to implement it. Xenomai runs over seven architectures (namely ppc, blackfin, arm, x86, x86_64, ia64 and ppc64), a variety of embedded and server platforms, and can be coupled to two major Linux kernel versions (2.4 and 2.6), for MMU-enabled and MMU-less systems. Supported real-time APIs include POSIX 1003.1b, VxWorks, pSOS+, VRTX and uITRON.

Difference between an RT OS and a normal OS

- The Linux scheduler, like that of other OSes such as Windows or MacOS, is designed for best average response, so it feels fast and interactive even when running many programs. However, it doesn’t guarantee that any particular task will always run by a given deadline. A task may be suspended for an arbitrarily long time, for example while a Linux device driver services a disk interrupt.

- Scheduling guarantees are offered by real-time operating systems (RTOSes), such as QNX, LynxOS or VxWorks. RTOSes are typically used for control or communications applications, not for general purpose computing.

- The general idea of RT Linux is that a small real-time kernel runs beneath Linux, meaning that the real-time kernel has a higher priority than the Linux kernel. Real-time tasks are executed by the real-time kernel, and normal Linux programs are allowed to run when no real-time tasks have to be executed. Linux can be considered as the idle task of the real-time scheduler. When this idle task runs, it executes its own scheduler and schedules the normal Linux processes. Since the real-time kernel has a higher priority, a normal Linux process is preempted when a real-time task becomes ready to run and the real-time task is executed immediately.

How is the real-time kernel given higher priority than Linux kernel?

Basically, an operating system is driven by interrupts, which can be considered as the heartbeats of a computer:

1. All programs running in an OS are scheduled by a scheduler which is driven by timer interrupts of a clock to reschedule at certain times.
2. An executing program can block or voluntary give up the CPU in which case the scheduler is informed by means of a software interrupt (system call).
3. Hardware can generate interrupts to interrupt the normal scheduled work of the OS for fast handling of hardware.

RT Linux uses the flow of interrupts to give the real-time kernel a higher priority than the Linux kernel:

1. When an interrupt arrives, it is first given to the real-time kernel, and not to the Linux kernel. But interrupts are stored to give them later to Linux when the real-time kernel is done.
2. As first in row, the real-time kernel can run its real-time tasks driven by these interrupts.
3. Only when the real-time kernel is not running anything, the interrupts which were stored are passed on to the Linux kernel.
4. As second in row, Linux can schedule its own processes driven by these interrupts.

Hence, when a normal Linux program runs and a new interrupt arrives:

1. It is first handled by an interrupt handler set by the real-time kernel;
2. The code in the interrupt handler awakes a real-time task;
3. Immediately after the interrupt handler, the real-time scheduler is called;
4. The real-time scheduler observes that another real-time task is ready to run, so it puts the Linux kernel to sleep, and awakes the real-time task.
Hence, for the real-time kernel and the Linux kernel to coexist on a single machine, a special way of passing the interrupts between the real-time kernel and the Linux kernel is needed. Each flavor of RT Linux does this in its own way. Xenomai uses an interrupt pipeline from the [Adeos project][1]. For more information, see also [Life with Adeos][2].
[1]: http://home.gna.org/adeos/
[2]: http://www.xenomai.org/documentation/xenomai-2.3/pdf/Life-with-Adeos-rev-B.pdf

Xenomai
———–

The Xenomai project was launched in August 2001.
Xenomai is based on an abstract RTOS core, usable for building any kind of real-time interfaces, over a nucleus which exports a set of generic RTOS services. Any number of RTOS personalities called “skins” can then be built over the nucleus, providing their own specific interface to the applications, by using the services of a single generic core to implement it.
The following skins are implemented on top of the generic core:
POSIX
pSOS+
VxWorks
VRTX
native: the Xenomai skin
uITRON
RTAI: only in kernel threads
Xenomai allows running real-time threads either strictly in kernel space or within the address space of a Linux process. A real-time task in user space still has the benefit of memory protection, but is scheduled by Xenomai directly, and no longer by the Linux kernel. The worst-case scheduling latency of such a task is always close to the hardware limits and predictable, since Xenomai is not bound to synchronizing with Linux kernel activity in such a context and can preempt any regular Linux activity with no delay. Hence, the preferred execution environment for Xenomai applications is user-space context.
But there might be a few cases where running some of the real-time code embodied into kernel modules is required, especially with legacy systems or very low-end platforms with under-performing MMU hardware. For this reason, Xenomai’s native API provides the same set of real-time services in a seamless manner to applications, regardless of their execution space. Additionally, some applications may need real-time activities in both spaces to cooperate, therefore special care has been taken to allow the latter to work on the exact same set of API objects.
In our terminology, the terms “thread” and “task” have the same meaning. When talking about a Xenomai task we refer to a real-time task in user space, i.e. within the address space of a Linux process, not to be confused with a regular Linux task/thread.
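As a tiny illustration of a user-space real-time task with the native skin (a hedged sketch; the function signatures follow the Xenomai 2.x native API as I remember it, so check them against your installed headers):

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/timer.h>

static RT_TASK demo_task;

static void demo(void *arg)
{
    /* this loop is scheduled by Xenomai, not by the Linux kernel */
    for (int i = 0; i < 5; i++) {
        printf("tick %d at %llu ns\n", i,
               (unsigned long long)rt_timer_read());
        rt_task_sleep(1000000000ULL);   /* 1 s, assuming nanosecond ticks */
    }
}

int main(void)
{
    mlockall(MCL_CURRENT | MCL_FUTURE);   /* avoid page faults in RT code */

    /* name, stack size (0 = default), priority 50, no special mode flags */
    rt_task_create(&demo_task, "demo", 0, 50, 0);
    rt_task_start(&demo_task, &demo, NULL);

    pause();   /* keep the Linux-side main thread alive */
    return 0;
}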

Regex to validate the Email

Here I am sharing a regex to validate email addresses.

This is a fairly standard version of such a regex.

^([a-zA-Z0-9\!\#\$\%\&\'\*\+\/\=\?\^\_\`\{\|\}\~\-]+)(?:\.[A-Za-z0-9\!\#\$\%\&\'\*\+\/\=\?\^\_\`\{\|\}\~\-]+)*@([a-zA-Z0-9]([\-]?[a-zA-Z0-9]+)*\.)+([a-zA-Z0-9]{0,6})$

It can validate email like

Valid Emails

arungupta@gmail.com
arun+gupta+ramjiki+@gmail.com
a.little.lengthy.but.fine@dept.example.com
disposable.style.email.with+symbol@example.com
other.email-with-dash@example.com
arun@daiict.ac.in
arun_gupta@gmail.com

Invalid Emails

me@
@example.com
me.@example.com
.me@example.com
me@example..com
me.example@com
me\@example.com