How to determine whether your machine is “Little Endian” or “Big Endian”.

What are Big and Little Endian?

Little and big endian are two ways of storing multibyte data types (int, float, etc.). On little endian machines, the least significant byte of the multibyte value is stored first, i.e. at the lowest address. On big endian machines, the most significant byte is stored first.

Big Endian(Wikipedia)

Little Endian(Wikipedia)

Is there a quick way to determine endianness of your machine?
There are a number of ways to determine the endianness of your machine. Here is one quick way of doing it.

#include <stdio.h>
int main()
{
   unsigned int i = 1;
   char *c = (char*)&i;
   if (*c)   
       printf("Little endian");
   else
       printf("Big endian");
   getchar();
   return 0;
}

In the above program, a character pointer c points to an integer i. Since a char is 1 byte, dereferencing the character pointer gives only the first byte of the integer. If the machine is little endian, *c will be 1 (because the least significant byte is stored first); if the machine is big endian, *c will be 0.
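
Another quick check uses a union instead of a pointer cast; this is just a sketch of the same idea (the names are chosen for illustration): store 1 in an unsigned int and look at which byte it lands in.

#include <stdio.h>

int main(void)
{
    union {
        unsigned int value;
        unsigned char bytes[sizeof(unsigned int)];
    } u;

    u.value = 1;

    if (u.bytes[0] == 1)
        printf("Little endian\n");  /* least significant byte sits at the lowest address */
    else
        printf("Big endian\n");     /* most significant byte sits at the lowest address  */

    return 0;
}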

Recovering from Broken Grub

On Friday, I was trying to downgrade GRUB to grub-legacy, so I installed grub-legacy, knowing I was playing with the bootloader. When I restarted, as expected, GRUB was not able to find the OS. The problem got worse when I realized I hadn’t installed the stage1, stage1.5 and stage2 files, i.e. I hadn’t run the setup commands (grub-mkconfig).

Dos grub didn’t had grub-install, 

Grub Error

So i googled didn’t found any solution. I read from different blog, websites and tried this.

  1. Use any live OS and run grub-install.
First, mount the partition where the OS is installed. You can find the partition by running:

$ blkid
/dev/sda1: UUID="ee51f4e9-1ef8-4b65-8ef4-299600e8cbf4" TYPE="ext4" PTTYPE="dos" PARTUUID="c679c6ed-01" 

/dev/sda2: UUID="cb97ec88-4282-459a-852f-f619138d46d9" TYPE="ext4" PARTUUID="c679c6ed-02"

Then run:

sudo mount /dev/sda1 /mnt

(Make sure the partition is mounted read-write; if needed, remount it:)

sudo mount -o remount,rw /mnt

(Here /dev/sda1 is the partition where the OS is installed; adjust the device names to your setup.)

sudo grub-install --root-directory=/mnt --recheck /dev/sda

Now that the scripts are installed, reboot the machine. (Most probably you will get a GRUB black screen.)

Now you have to do a few things:

a. Find the partitions. 

ls

It will show you which partitions are there; you may get something like:

(hd0) (hd0,5) (hd0,1) (hd1) (hd1,1) (hd1,2) (fd0) (hd0,msdos1) (hd0,msdos2)

Then run

ls (hd0,0)/

and observe the output; if you see the Linux root filesystem (with folders like etc and boot), then this is your root partition.

b. Set the root

root (hd0,0)

The (hd0,0) notation is explained here:

  • The brackets are a must; all devices listed in GRUB menu must be enclosed in brackets.
  • hd stands for hard disk; alternatively, fd stands for floppy disk, cd stands for CD-ROM etc.
  • The first number (integer for geeks) refers to the physical hard drive number; in this case, the first drive, as they are counted from zero up. For example, hd2 refers to the third physical hard drive.
  • The second number refers to the partition number of the selected hard drive; again, partitions are counted from zero up. For example, 1 stands for the second partition.

From here, it is evident that GRUB (menu) does not discriminate between IDE or SCSI drives or primary or logical partitions. The task of deciding which hard drive or partition may boot is left to BIOS and Stage 1. As you see, the notation is very simple.

Primary partitions are marked from 0 to 3 (hd?,0), (hd?,1), (hd?,2), (hd?,3). Logical partitions in the extended partition are counted from 4 up, regardless of the actual number of primary partitions on the hard disk, e.g. (hd1,7).

In my case I guessed: I tried setting the root as described above, then used GRUB’s ls command. If typing ls /boot and pressing Tab shows anything, that is the partition where you have to (re)install your GRUB.

c. Load the kernel

kernel /boot/vmlinuz-linux ro root=/dev/sda2

d. Load the initramfs image

initrd /boot/initramfs-linux.img

(Adjust the kernel and initramfs file names to whatever ls /boot on your root partition actually shows.)

Then Run

boot

You will be able to boot the desired OS. [1]

Link Aggregation LAG (IEEE 802.3ad)

Yesterday my colleague asked me about LAG: what does LAG mean, and what is it used for?

What does Link Aggregation (LAG) mean?

Link aggregation (LAG) is used to describe various methods for using multiple parallel network connections to increase throughput beyond the limit that one link (one connection) can achieve. For link aggregation, physical ports must reside on a single switch. Split Multi-Link Trunking (SMLT) and Routed-SMLT (RSMLT) remove this limitation and physical ports are allowed to connect/split between two switches. This term is also known as Multi-Link Trunking (MLT), Link Bundling, Ethernet/Network/NIC Bonding or NIC teaming.

Link Aggregation (LAG) :

Link aggregation is a technique used in high-speed backbone networks to enable fast and inexpensive transmission of bulk data. The best feature of link aggregation is its ability to increase the network capacity while maintaining a fast transmission speed and not changing any hardware devices, thus reducing cost.

Cost Effectiveness: LAG is a very common technique for establishing a new network infrastructure with extra cabling above the current requirements. Labor cost is much higher than the cost of the cabling, so when a network extension is required, the extra cables are used without incurring any additional labor. However, this can be done only when extra ports are available.

Higher Link Availability: This is the best feature of LAG. A communication system keeps working even when a link fails. In such situations, link capacity is reduced but data flow is not interrupted.

Network Backbone: Formerly, there were many techniques used for networking, but IEEE standards are always preferred. LAG supports network load balancing; different load-balancing algorithms are set by network engineers or administrators. Furthermore, network speed can be increased in small increments, saving both resources and cost.

Limitations: With all kinds of implementations, each link and piece of hardware is standardized and engineered so as not to affect the network efficiency or link speed. Additionally, with single switching, all ports (802.3ad, broadcast, etc.) must reside on a single switch or the same logical switch.

How to set up LAG on a Linux box

Thanx to Techopedia

OpenSSL “HeartBleed” Bug


Monday afternoon, the IT world got a very nasty wakeup call, an emergency security advisory from the OpenSSL project warning about an open bug called “Heartbleed.” The bug could be used to pull a chunk of working memory from any server running their current software. There was an emergency patch, but until it was installed, tens of millions of servers were exposed. Anyone running a server was suddenly in crisis mode.

If the “Heartbleed” name sounds dramatic, this bug seems to live up to the hype. It’s already far worse than the GoToFail bug that embarrassed Apple earlier this year, both by the scale of computers affected and the depth of the breach. The new bug would let attackers pull the private keys to the server, letting attackers listen in on data traffic and potentially masquerade as the server. Even worse, it’s old: the bug dates back two years, and it’s still unclear how long anyone’s known about it.

OpenSSL isn’t widely known outside of the coding world, but as many as two out of three servers on the web rely on its software. The sudden reveal means anyone involved is now scrambling for a fix. Already, Yahoo has been exposed by the bug, and experts have advised any Yahoo users to steer clear of their accounts until the company has time to update their servers. (A Yahoo representative tells The Verge the core sites are now patched, although the team is still working to implement the fix across the rest of the site.) Dozens of other smaller companies have also reportedly been affected, including Imgur, Flickr, and LastPass (although LastPass says no unencrypted data was exposed). “It is catastrophically bad, just a hugely damaging bug,” says ICSI security researcher Nicholas Weaver.

Discovered by Google researcher Neel Mehta, the bug allows an attacker to pull 64k at random from a given server’s working memory. It’s a bit like fishing — attackers don’t know what usable data will be in the haul — but since it can be performed over and over again, there’s the potential for a lot of sensitive data to be exposed. The server’s private encryption keys are a particular target, since they’re necessarily kept in working memory and are easily identifiable among the data. That would allow attackers to eavesdrop on traffic to and from the service, and potentially decrypt any past traffic that had been stored in encrypted form.

For most privacy tools relying on OpenSSL, the takeaway is catastrophic. A blog post from the Tor Project told users, “if you need strong anonymity or privacy on the internet, you might want to stay away from the internet entirely for the next few days while things settle.” In many cases, a few days may not be enough. It will give services time to patch their servers, but if any private keys were compromised before the patch went up, it would give attackers free rein in the months to come. Servers can reset their certificates, but it’s slow and expensive, and experts suspect many of them may simply assume the patch is enough. “I bet that there will be a lot of vulnerable servers a year from now,” Weaver says. “This won’t get fixed.”

Apple, Google and Microsoft appear to be unaffected, along with the major e-banking services. Yahoo, on the other hand, was affected and leaking user credentials for a significant portion of the day before its core sites were fixed. More generally, any server running OpenSSL on Apache or Nginx will be affected, which implicates a huge variety of everyday websites and services.

For now, there are a few ways users can tell which services are safe — but the news isn’t reassuring. This site, built by developer Filippo Valsorda, offers a spot-check as to which services are currently unpatched, but the site’s code is also producing false negatives, so it shouldn’t be taken as definitively ruling anything out. Any patched server will also need to generate new SSL certificates to make sure attackers can’t use keys that were exposed in the breach. To check, use an SSL tracker like this one and look for a certificate’s “issued on” date, which should be dated after the recent patch. Resetting the certificates will take time and money, but if a compromised site keeps using a compromised certificate, they’ll be leaving themselves open to an attack.

It’s still early to tell what larger changes will be made as a result of the breach, but some lessons are already clear. Despite the vast infrastructure relying on OpenSSL, the open-source project is comparatively underfunded, and some experts have already called for more donations to the project to prevent vulnerabilities like Heartbleed from slipping through the cracks. Perfect Forward Secrecy could also have limited the damage from the bug, preventing decryption after the fact.

But the most troubling lesson might be how hard vulnerabilities are to discover, and how damaging they can be once fully revealed. “These are really subtle bugs,” Weaver says. “You might detect it if you ran it through a memory checker, but this is not the kind of thing that just shows up looking at the code.” That’s a credit to Google, who was rigorous enough to discover the bug — but for anyone relying on secure software, it’s a troubling thought.

Some comments clarifying the post:

  • The answer to this question is “maybe”. The article here has several points wrong, one of them being that the attacker cannot select which data they are going to leak. The flaw basically blows out a memory buffer and returns whatever data happens to be in the extra bytes. It could be something, or, in most cases, it could be nothing. To get useful data, you would have to flood the server with TLS “heartbeat” requests.
  • As someone familiar with this tech, I’d say the author of this article did a decent job covering the main bases. The possibilities are huge. Getting a 64k chunk of random memory is pretty significant. There are just over 262,000 64k chunks in a 16GB server. An attacker can muster 262,000 requests in just a few minutes. Once the random jabs have shaken loose pretty much all the contents of the server’s memory, the attacker can sift through the responses for the private SSL keys. The private SSL keys can be used by the attacker to imitate the host to visiting browsers, which will then submit login info. Having the server’s private SSL key can also greatly simplify a man-in-the-middle attack on sessions that are supposed to be private between the browser and the server. Finally, the private SSL key also enables previously captured network traffic to be fully decrypted.
  • The article is confusing about what the potential leakage is. There is the potential that an attacker could get any kind of data that was loaded into memory, which makes the bug nasty. But it’s not like an attacker could just hit the server and pull everything in memory, or download a specific kind of data. They would have to send millions of heartbeat requests just to guarantee that they got any useful data, and even then it wouldn’t be guaranteed. I’m not trying to downplay the seriousness of the issue, but it’s probably not likely that people need to freak out about all their data being stolen right now. The serious concern is the loss of server private keys, so people should go through the process of updating their servers and re-keying their SSL certificates to ensure that they are safe.
  • No it doesn’t; it allows an attacker to see the contents of a small portion of RAM immediately following a chunk of RAM that OpenSSL uses. They have no control over what specific portion of memory it is nor what it contains; it could be anything, and what it is exactly depends on a bunch of pretty much random factors. The concern is that there is a possibility that it could contain something important (like the server’s private encryption keys), which would obviously be a problem. This is where the Verge article delves into sensationalist BS; attackers can’t write software that is essentially “push button to steal certs”. This would take time and resources to exploit in any meaningful way unless the attacker was very lucky. Again, this is basically like trying to steal someone’s password by walking behind their desk hoping that they happen to be typing it in as you walk by. It’s a big deal because relying on someone not getting lucky is not a good form of security, but the chances that this was exploited in a meaningful way are almost nil.

All the technical details about OpenSSL 1.0.1

An information disclosure vulnerability has been found, and promptly patched, in OpenSSL.

OpenSSL is a very widely used encryption library, responsible for putting the S in HTTPS, and the padlock in the address bar, for many websites.

The bug only exists in the OpenSSL 1.0.1 source code (from version 1.0.1 to 1.0.1f inclusive), because the faulty code relates to a fairly new feature known as the TLS Heartbeat Extension.

The heartbeat extension was first documented in RFC 6520 in February 2012.

TLS heartbeats are used as “keep alive” packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange.

Because the heartbeats consist of a request and a matching response, they allow either end to confirm not only that the session is open, but also that end-to-end connectivity is working properly.

Sending heartbeat requests

The RFC 6520 standard explicitly restricts the maximum size of a heartbeat request to 2^14 bytes (16 KB), but OpenSSL itself generates far shorter requests.

Don’t worry if you don’t understand C; but if you do, the OpenSSL heartbeat request code looks like this:

unsigned int payload = 18; /* Sequence number + random bytes */
unsigned int padding = 16; /* Use minimum padding */

/* Check if padding is too long, payload and padding
* must not exceed 2^14 - 3 = 16381 bytes in total.
*/

OPENSSL_assert(payload + padding <= 16381);

/* Create HeartBeat message, we just use a sequence number
 * as payload to distuingish different messages and add
 * some random stuff.
 *  - Message Type, 1 byte
 *  - Payload Length, 2 bytes (unsigned int)
 *  - Payload, the sequence number (2 bytes uint)
 *  - Payload, random bytes (16 bytes uint)
 *  - Padding
 */

buf = OPENSSL_malloc(1 + 2 + payload + padding);
p = buf;
/* Message Type */
*p++ = TLS1_HB_REQUEST;
/* Payload length (18 bytes here) */
s2n(payload, p);
/* Sequence number */
s2n(s->tlsext_hb_seq, p);
/* 16 random bytes */
RAND_pseudo_bytes(p, 16);
p += 16;
/* Random padding */
RAND_pseudo_bytes(p, padding);

ret = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buf, 3 + payload + padding);

The reason that the code says that “payload and padding must not exceed 16381 bytes in total” is that the 16KByte (16384 byte) maximum heartbeat request size includes one byte to signal that this is a TLS1_HB_REQUEST, and two bytes to denote the length of the payload data in the request.

As the code stands, the OPENSSL_assert to verify that payload + padding <= 16381 is redundant, because the payload size is hard-wired to 18 bytes and the padding size to 16.

But the programmer has tried to do the right thing: put in the check anyway, in case someone changes those payload or padding sizes in the future without considering the consequences.

The code then transmits a heartbeat request consisting of:

  • The single byte 0x01 (denoting that this is a TLS1_HB_REQUEST).
  • Two bytes containing the 16-bit representation of 18 (the length of the payload).
  • Two bytes of payload consisting of a 16-bit sequence number.
  • 16 bytes of random data making up the rest of the 18-byte payload.
  • 16 further random padding bytes, required by the standard.

Replying to heartbeat requests

When vulnerable versions of OpenSSL 1.0.1 respond to a heartbeat request, they aren’t quite so careful in processing the received data.

Heartbeat replies are supposed to contain a copy of the payload data from the request, as a way of verifying that the encrypted circuit is still working both ways.

It turns out that you can send a small heartbeat request, but sneakily set your payload length field to 0xFFFF (65535 bytes).

Then, OpenSSL will uncomplainingly copy 65535 bytes from your request packet, even though you didn’t send across that many bytes:

/* Allocate memory for the response, size is 1 byte
 * message type, plus 2 bytes payload length, plus
 * payload, plus padding
 */
buffer = OPENSSL_malloc(1 + 2 + payload + padding);
bp = buffer;

/* Enter response type, length and copy payload */
*bp++ = TLS1_HB_RESPONSE;
s2n(payload, bp);
memcpy(bp, pl, payload);
bp += payload;
/* Random padding */
RAND_pseudo_bytes(bp, padding);

r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);

That means OpenSSL runs off the end of your data and scoops up whatever else is next to it in memory at the other end of the connection, for a potential data leakage of approximately 64KB each time you send a malformed heartbeat request.
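
To make the pattern concrete, here is a small, self-contained C sketch of the same kind of mistake. This is not OpenSSL’s code; the buffer names and sizes are made up for illustration, and the deliberate over-read is exactly the bug being demonstrated:

#include <stdio.h>
#include <string.h>

/* Toy responder: echoes back 'claimed_len' bytes of the request payload.
 * Like the vulnerable heartbeat handler, it trusts the length the peer
 * supplied instead of checking how many bytes actually arrived. */
static void buggy_echo(const char *payload, size_t claimed_len)
{
    char response[1024];
    memcpy(response, payload, claimed_len);   /* BUG: reads past the real payload        */
    fwrite(response, 1, claimed_len, stdout); /* leaks whatever followed it in memory    */
}

int main(void)
{
    char request[8] = "hello";  /* only a few real payload bytes arrived...   */
    buggy_echo(request, 64);    /* ...but the sender claims the payload is 64 */
    return 0;
}

In a real service the equivalent over-read is what lets a malformed heartbeat pull back up to 64KB of whatever happens to sit next to the request in memory.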

This bug has been rather melodramatically named “heartbleed,” for reasons that should now be obvious.

According to the Finnish National Cyber Security Centre, the sort of data that “bleeds” when the bug is triggered varies enormously, but may include message contents, user credentials, session keys and even copies of a server’s own private keys.

That’s not good!

Fixing the problem

Fortunately, there’s a fix already: simply upgrade to OpenSSL 1.0.1g.

If you don’t want to or can’t do that, you can rebuild your current version of OpenSSL from source without TLS Heartbeat support, by adding -DOPENSSL_NO_HEARTBEATS at compile time.

Both of these immunise you from this flaw.

The new OpenSSL version includes a bounds check to make sure the payload length you specified isn’t longer than the data you actually sent:

/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
        return 0; /* silently discard */
hbtype = *p++;
n2s(p, payload);
if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0; /* silently discard per RFC 6520 sec. 4 */

The -DOPENSSL_NO_HEARTBEATS compile-time option simply omits the buggy code altogether from vulnerable versions.

The lessons to learn

If you’re a programmer, remember: always double-check the data that the other side sent you before you use it to control the operation of your own code.

And remember: buffer overflows should always be treated as serious, even if they don’t lead to remote code execution.

Data leakage bugs can be just as disastrous, as this flaw demonstrates.

For further information…

If you’d like to know more about the main sorts of vulnerability, from RCE (remote code execution) through EoP (elevation of privilege) to information disclosure, please listen to our Techknow podcast, Understanding Vulnerabilities.

This post is copied almost entirely from “The Verge” and “SOPHOS“. It is just an informative post.

Semaphore and Critical section

Before understanding semaphores we should first discuss the critical section.
A critical section is a piece of code that accesses shared data and that two or more processes may try to execute at the same time. If they do, the simultaneous access can leave our data inconsistent. To avoid this inconsistency we use synchronization methods.

So a semaphore is one such synchronization technique. It is a locking mechanism used to guard access to the critical section. If a process wants to enter the critical section it has to acquire the lock first, and it must release the lock once it has completed its work. When one process already holds the lock and another process tries to acquire it, the second process has to wait until the lock is freed by the first.
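
As a small sketch of this idea (illustrative only; it uses a pthread mutex, which behaves like a binary semaphore), two threads increment a shared counter and the lock keeps the increments in the critical section from interleaving:

#include <stdio.h>
#include <pthread.h>

/* Shared data and the lock that guards its critical section. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* acquire the lock before entering     */
        counter++;                           /* critical section: touch shared data  */
        pthread_mutex_unlock(&counter_lock); /* release so other threads can enter   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* always 200000 while the lock is used */
    return 0;
}

Build it with gcc -pthread; without the lock the two threads would race and the final count would usually come out short.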

Suppose we have a total of n identical objects, and for them we have n locks (a counting semaphore initialized to n). If a process tries to acquire a lock and one is available, the value of the semaphore is decreased by one; if no lock is available, the process has to wait until one is released. We can understand this with the following example (see also the code sketch after the table).

total number of objects = 3
total number of locks available = 3

Process    Step       Lock available    Lock value    Status
P1         acquire    Yes               2             Acquired
P2         acquire    Yes               1             Acquired
P3         acquire    Yes               0             Acquired
P4         acquire    No                0             Wait
P2         release    Yes               1             Released
P4         acquire    Yes               0             Acquired
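
Here is a minimal sketch of the counting-semaphore idea from the table, using POSIX semaphores and threads (the count of 3 and the worker names mirror the table; this is illustrative, not production code):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define NUM_RESOURCES 3
#define NUM_WORKERS   4

static sem_t lock;   /* counting semaphore initialized to the number of resources */

static void *worker(void *arg)
{
    long id = (long)arg;

    sem_wait(&lock);                      /* acquire: blocks while the count is 0 (like P4) */
    printf("P%ld acquired a resource\n", id);
    sleep(1);                             /* critical section: use the shared object        */
    printf("P%ld released a resource\n", id);
    sem_post(&lock);                      /* release: the count goes back up                */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];

    sem_init(&lock, 0, NUM_RESOURCES);    /* 3 locks available, as in the table */

    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)(i + 1));
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);

    sem_destroy(&lock);
    return 0;
}

Compile with gcc -pthread; at most three workers are inside the critical section at any moment, and the fourth waits until one of them posts the semaphore.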

VMware problem with Ubuntu 13.10

I installed VMware and it was working well; then I updated/upgraded the OS, and now it’s not working. I googled and found the solution. Before giving the solution, I want to explain why this happens.

VMware Player has a nice auto-detection of kernel changes, and requests the user to compile the required modules in order to load them. This happens from time to time after a regular update of your system. Usually, the dialog of VMware Kernel Module Updater pops up, asks for root access authentication, and completes the compilation.

In theory this is supposed to work flawlessly, but in reality there are pitfalls occasionally. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn’t start as expected. When you launch VMware Player as super user (root), the dialog would stall.

The solution is:

sudo vmware-modconfig --console --install-all

This solution worked for me.

My machine was VMware Player 5.0.2 build-1031769

Bash script not working with `sh`

I was testing a simple script, and I was wondering why it works fine when executed directly from its directory (./test.sh), but not when I run it with the sh command (sh test.sh):
test.sh: 3: test.sh: [[: not found
test.sh: 7: test.sh: [[: not found
Script:

#!/usr/bin/env bash

if [[ $1 = one ]]
        then
        printf "%b" "two\n" >&2
        exit 0
elif [[ $1 = two ]]
then
        printf "%b" "one\n" >&2
        exit 0
else
        printf "%b" "Specify argument: one/two\n"
        exit 1
fi

I googled and found this link.
Summary

sh is a different program than bash.

Detail

The problem is that the Bourne shell (sh) is not the Bourne Again shell (bash). Namely, sh doesn’t understand the [[ construct. In fact, [ isn’t shell syntax either: [ is an actual program or a link to /bin/test (or /usr/bin/[, /usr/bin/test).

$ which [
/bin/[
$ ls -lh /bin/[
-r-xr-xr-x  2 root  wheel    42K Feb 29 17:11 /bin/[

When you execute your script directly through ./test.sh, you’re calling the script as the first argument to the program specified in the first line. In this case:
#!/usr/bin/env bash
Often, this is directly the interpreter (/bin/bash, or any number of other script interpreters), but in your case you’re using env to run a program in a modified environment, and the argument that follows is still bash. Effectively, ./test.sh is bash test.sh.
Because sh and bash are different shells with different syntax interpretations, you’re seeing that error. If you run bash test.sh, you should see what is expected.
More info
Others have pointed out in comments that /bin/sh can be a link to another shell. Historically, sh was the Bourne shell on the old AT&T Unix, and in my mind the canonical line of descent. However, that is different in BSD variants and has diverged in other Unix-based systems and distributions over time. If you’re really interested in the inner workings (including how /bin/sh and /bin/bash can be the same program yet behave totally differently), read the following:
[1] [2]

As noted: /bin/sh typically (though not always) invokes a POSIX-compliant Bourne shell. Bash is Not Bourne.
Bash will attempt to emulate Bourne when invoked as ‘sh’ (as when /bin/sh symlinks or links to /bin/bash), when $POSIXLY_CORRECT is defined in the invoking environment, when invoked with the --posix invocation option, or when ‘set -o posix’ has been executed. This enables testing a Bourne shell script / command for POSIX compliance.
Alternately, invoke scripts / test commands with a known POSIX-compliant shell. ‘dash’ is close, the Korn shell (ksh) IIRC offers a POSIX compliant option as well.

Date difference with time in Perl

The function validateDateDifference() can validate a date difference together with the time.

Validation Range is from 1900-01-01 00:00:00 to 2199-12-31 23:59:59

Syntax

validateDateDifference("Startdate time", "Enddate time", "dateformat")

Supported date formats: mmddyyyy, ddmmyyyy or yyyymmdd
(the function supports only these formats)

Start date syntax: mm-dd-yyyy or dd-mm-yyyy or yyyy-mm-dd
You can replace “-” with “.” or “/”,
e.g. mm.dd.yyyy or mm/dd/yyyy
Time syntax: HH:MM:SS
e.g. 19:10:59

Test cases :
validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","ddmmyyyy")
validateDateDifference("2013-01-31 12:12:58","2013-12-02 12:12:59","yyyymmdd")

If you want to validate date only then you can use function validateDateFormat()

validateDateFormat("date", "dateformat")

syntax same as above

If you want to validate time only then you can use function isvalidtime()

isvalidtime("time")

time syntax same as above


sub validateDateFormat{
$regexmmddyyyy='^(0[1-9]|1[012])[-/.](0[1-9]|[12][0-9]|3[01])[- /.]((?:19|20|21)\d\d)$';
$regexddmmyyyy='^(0[1-9]|[12][0-9]|3[01])[-/.](0[1-9]|1[012])[- /.]((?:19|20|21)\d\d)$';
$regexyyyymmdd='^((?:19|20|21)\d\d)[- /.](0[1-9]|1[012])[-/.](0[1-9]|[12][0-9]|3[01])$';
$date=shift;
$format = shift;
if($format eq 'ddmmyyyy'){
if(eval $date=~$regexddmmyyyy){
$date = $1;
$month = $2;
$year = $3;
if(isvaliddate($date,$month,$year) eq "true"){
return $year.$month.$date; # Valid date
}else{
return "false"; # Not a valid date
}
}
}elsif($format eq 'mmddyyyy'){
if(eval $date=~$regexmmddyyyy){
$date = $2;
$month = $1;
$year = $3;
if(isvaliddate($date,$month,$year) eq "true"){
return $year.$month.$date; # Valid date
}else{
return "false"; # Not a valid date
}
}
}elsif($format eq 'yyyymmdd'){
if(eval $date=~$regexyyyymmdd){
$date = $3;
$month = $2;
$year = $1;
if(isvaliddate($date,$month,$year) eq "true"){
return $year.$month.$date; # Valid date
}else{
return "false"; # Not a valid date
}
}
}else{
return "false"; # Not a valid date
}
}
## HH:MM:SS is allowed only
sub isvalidtime{
$time = shift;
$regexhhmmss='^([01][0-9]|2[0-3])[:]([0-5][0-9])[:]([0-5][0-9])$'; # allow 00-23 hours and 00-59 minutes/seconds
if(eval $time=~$regexhhmmss){
return "true";
}else{
return "false";
}
}
sub isvaliddate{
$date = shift;
$month = shift;
$year = shift;
#$date
#$month
#$year
if ($date == 31 and ($month == 4 or $month == 6 or $month == 9 or $month == 11)) {
return "false"; # 31st of a month with 30 days
}elsif ($date >= 30 and $month == 2) {
return "false"; # February 30th or 31st
} elsif ($month == 2 and $date == 29 and not ($year % 4 == 0 and ($year % 100 != 0 or $year % 400 == 0))) {
return "false"; # February 29th outside a leap year
} else {
return "true"; # Valid date
}
}
sub validateDateDifference{
$startdate = shift;
$enddate = shift;
$format =shift;
# Trimming dates and time " 20-01-2013 12.58.1 " --> "20-01-2013 12.59.32"
$startdate =~ s/^\s+|\s+$//g;
$enddate =~ s/^\s+|\s+$//g;
$format=~ s/^\s+|\s+$//g;
($startdate, $startdate_time) = split(/ +/, $startdate);
($enddate, $enddate_time) = split(/ +/, $enddate);
$startdate = validateDateFormat($startdate,$format);
if($startdate eq "false"){
return "false"; # invalid date difference
}
$enddate = validateDateFormat($enddate,$format);
if($enddate eq "false"){
return "false"; # invalid date difference
}
#converting into YYYYMMDD format
if($enddate == $startdate ){

if(isvalidtime($startdate_time) eq "false"){
print "reached";
return "false";
}
if(isvalidtime($enddate_time) eq "false"){
print "reached";
return "false";
}
$startdate_time =~ s/://g;
$enddate_time =~ s/://g;
if($startdate_time < $enddate_time){
return "true"; # Valid date difference
}else{
return "false"; # invalid date difference
}
}elsif($enddate > $startdate ){
return "true"; # Valid date difference
}else{
return "false"; # invalid date difference
}
}
print 'validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","ddmmyyyy")\t'.validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","ddmmyyyy")."\n";
print 'validateDateDifference("12.12.2014 12:12:58","12.12.2014 12:12:12","ddmmyyyy")\t'.validateDateDifference("12.12.2014 12:12:58","12.12.2014 12:12:12","ddmmyyyy")."\n";
print 'validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","mmddyyyy")\t'.validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","mmddyyyy")."\n";
print 'validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","mmmmyyyy")\t'.validateDateDifference("12.12.2013 12:12:58","12.12.2013 12:12:59","mmmmyyyy")."\n";
print 'validateDateDifference("2013-01-31 12:12:58","2013-12-02 12:12:59","yyyymmdd")\t'.validateDateDifference("2013-01-31 12:12:58","2013-12-02 12:12:59","yyyymmdd")."\n";
print 'validateDateDifference("2013-01-31 12:12:58","2013-12-02 12:12:59","yyyymmdd")\t'.validateDateDifference("2013-01-31 12:12:58","2013-12-02 12:12:59","yyyymmdd")."\n";

connecting to archive.ubuntu.com takes too long

I have an Ubuntu 13.04 that I just installed fresh. Now if I try to do anything with apt-get, it tries to connect to archive.ubuntu.com. It stays at the [Connecting to archive.ubuntu.com (2001:67c:1360:8c01::1a)] phase for about 2 minutes, after which it actually starts to communicate and download stuff.

Eventually it always connects, but it waits at the [Connecting to archive.ubuntu.com (2001:67c:1360:8c01::1a)] phase every time for about 2 minutes!

I didn’t have this problem previously on Ubuntu 13.04, right after reinstalling the OS.

Solution:

I figured out the problem. Posted below as an answer!

Running the following command in Terminal tells if IPv6 is enabled or not:

cat /proc/sys/net/ipv6/conf/all/disable_ipv6

0 means it’s enabled, while 1 means it’s disabled.

To disable IPv6 from within Terminal, enter the following and reboot:

echo "#disable ipv6" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf

After the reboot, re-run the first command and it should now print 1. Alternatively, to apply the settings without rebooting, run

sudo sysctl -p

All the disable_ipv6 values in the output should be 1; IPv6 is now disabled.

Understanding “extern” keyword in C

Hello, one more important thing to understand.

I’m sure that this post will be as interesting and informative to C beginners as it will be to those who are well versed in C. So let me start by saying that the extern keyword applies to C variables (data objects) and C functions. Basically, the extern keyword extends the visibility of C variables and C functions. Probably that’s the reason why it was named extern.

Though (almost) everyone knows the meaning of declaration and definition of a variable/function, for the sake of completeness of this post I would like to clarify them. Declaration of a variable/function simply declares that the variable/function exists somewhere in the program, but no memory is allocated for it. The declaration still serves an important role: it tells the program the type of the variable/function. So when a variable is declared, the program knows the data type of that variable. In the case of a function declaration, the program knows what the arguments to that function are, their data types, the order of the arguments, and the return type of the function. So that’s all about declaration. Coming to the definition: when we define a variable/function, apart from playing the role of a declaration, it also allocates memory for that variable/function. Therefore, we can think of a definition as a superset of a declaration (or a declaration as a subset of a definition). From this explanation, it should be obvious that a variable/function can be declared any number of times but can be defined only once. (Remember the basic principle that you can’t have two locations for the same variable/function.) So that’s all about declaration and definition.
Now coming back to our main objective: understanding the “extern” keyword in C. I’ve explained the role of declaration/definition because it’s mandatory to understand them in order to understand the “extern” keyword. Let us first take the easy case: use of extern with C functions. By default, the declaration and definition of a C function have “extern” prepended to them. It means that even though we don’t use extern with the declaration/definition of C functions, it is present there. For example, when we write:

    int foo(int arg1, char arg2);

There’s an extern present in the beginning which is hidden and the compiler treats it as below.

    extern int foo(int arg1, char arg2);

Same is the case with the definition of a C function (Definition of a C function means writing the body of the function). Therefore whenever we define a C function, an extern is present there in the beginning of the function definition. Since the declaration can be done any number of times and definition can be done only once, we can notice that declaration of a function can be added in several C/H files or in a single C/H file several times. But we notice the actual definition of the function only once (i.e. in one file only). And as the extern extends the visibility to the whole program, the functions can be used (called) anywhere in any of the files of the whole program provided the declaration of the function is known. (By knowing the declaration of the function, C compiler knows that the definition of the function exists and it goes ahead to compile the program). So that’s all about extern with C functions.
Now let us take the second and final case, i.e. the use of extern with C variables. I feel that it is more interesting and informative than the previous case, where extern is present by default with C functions. So let me ask the question: how would you declare a C variable without defining it? Many of you would see it as trivial, but it’s an important question for understanding extern with C variables. The answer goes as follows.

    extern int var;

Here, an integer type variable called var has been declared (remember no definition i.e. no memory allocation for var so far). And we can do this declaration as many times as needed. (remember that declaration can be done any number of times) So far so good. :)
Now how would you define a variable? I agree that it is the most trivial question in programming, and the answer is as follows.

    int var;

Here, an integer type variable called var has been declared as well as defined. (Remember that definition is a superset of declaration.) Here the memory for var is also allocated. Now here comes the surprise: when we declared/defined a C function, we saw that an extern was present by default. While defining a function, we can prepend it with extern without any issues. But that is not the case with C variables. If extern were present by default for variables, memory would never be allocated for them; they would only ever be declared. Therefore, we put extern explicitly on C variables when we want to declare them without defining them. Also, since extern extends the visibility to the whole program, by externing a variable we can use it anywhere in the program, provided we know its declaration and the variable is defined somewhere.
Now let us try to understand extern with examples.
Example 1:

int var;
int main(void)
{
   var = 10;
   return 0;
}

Analysis: This program is compiled successfully. Here var is defined (and declared implicitly) globally.
Example 2:

extern int var;
int main(void)
{
  return 0;
}

Analysis: This program is compiled successfully. Here var is declared only. Notice var is never used so no problems.
Example 3:

extern int var;
int main(void)
{
 var = 10;
 return 0;
}

Analysis: This program throws error in compilation. Because var is declared but not defined anywhere. Essentially, the var isn’t allocated any memory. And the program is trying to change the value to 10 of a variable that doesn’t exist at all.
Example 4:

#include "somefile.h"
extern int var;
int main(void)
{
 var = 10;
 return 0;
}

Analysis: Supposing that somefile.h has the definition of var. This program will be compiled successfully.
Example 5:

extern int var = 0;
int main(void)
{
 var = 10;
 return 0;
}

Analysis: Will this program work? Well, here comes another surprise from the C standards. They say that if a variable is only declared and an initializer is also provided with that declaration, then the memory for that variable will be allocated, i.e. that variable will be considered as defined. Therefore, as per the C standard, this program will compile successfully and work.
So that was a preliminary look at “extern” keyword in C.
I’m sure that you want to have some take away from the reading of this post. And I would not disappoint you. :)
In short, we can say
1. Declaration can be done any number of times but definition only once.
2. The “extern” keyword is used to extend the visibility of variables/functions.
3. Since functions are visible throughout the program by default, the use of extern is not needed in a function declaration/definition; its use is redundant.
4. When extern is used with a variable, it’s only declared not defined.
5. As an exception, when an extern variable is declared with initialization, it is taken as definition of the variable as well.
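
As a final sketch, here is how this typically looks across two source files (the file names and the variable are just for illustration):

/* file1.c : defines the variable and a function (memory is allocated here) */
int var = 10;

int get_var(void)
{
    return var;
}

/* file2.c : declares them with extern and uses them without defining them */
#include <stdio.h>

extern int var;        /* declaration only; the definition lives in file1.c   */
int get_var(void);     /* function declaration; "extern" is implicit here     */

int main(void)
{
    var = 20;          /* fine: extern extended var's visibility to this file */
    printf("%d\n", get_var());   /* prints 20 */
    return 0;
}

Compiling and linking both files together (for example, gcc file1.c file2.c) produces a program that prints 20; if file1.c were missing, the linker would complain about an undefined reference to var, which is the multi-file analogue of Example 3 above.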

Copied from GeeksforGeeks. Thanx GeeksforGeeks.