30 January 2011

News: 30-Jan-2011

This weekend I started the migration part. I was searching for a cheap VPS and remembered that SimpliQ, a company where I worked a couple of years ago, sells lots of them. From their website I found intovps.com (not enjoyvps.ro, which I haven't tested yet). I must say I'm impressed by a Romanian company handling customers so well.

So far I haven't had a single support email that wasn't answered in less than 15 minutes or so. I'm writing this at 23:30 (GMT+1), which is 00:30 (GMT+2) in Romania, and even so the replies come within a couple of minutes... I just asked if I can upgrade only my bandwidth and it seems I can't... anyway, let me explain why I like them so much...

There was a flood in the data center once (I've had the VPS for one week now); they redirected the traffic or whatever they did, but in about one hour everything was working perfectly.

I needed the TUN/TAP kernel module in order to create encrypted and compressed tunnels over Ethernet (I played with it a lot this weekend, so if you have any questions, I'm here). I didn't even read the manual: I pressed one button in the IntoVPS panel, waited a couple of seconds (one reboot required) and my VPS was able to create tunnels.
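For reference, this is roughly the kind of tunnel I mean - a quick sketch using OpenSSH's built-in tun support, with made-up addresses and hostname (just an illustration, not my exact setup):

# needs the tun module on both ends and "PermitTunnel yes" in the remote sshd_config
ssh -C -w 0:0 root@remote.example.com    # -C compresses the traffic, -w creates tun0 on both sides
ip addr add 10.0.0.1/30 dev tun0         # local end; use 10.0.0.2/30 on the remote end
ip link set tun0 up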

I needed another kernel module for my applications (did I mention that I installed Quagga with the OSPF daemon between all my servers and configured tunnels and backup routes this weekend?), but they refused very politely to install it because it was a security issue (after reading some more documentation, it seems I had effectively asked for permission to tcpdump the host's Ethernet card - well, I didn't know... honestly :) ).

Then I asked for another module (did I mention that I installed 2 remote servers with 3 TB of storage each, where I will keep all the pictures?) that allows me to mount remote filesystems (sshfs in my case). Within a couple of minutes I had it installed.
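For the curious, mounting a remote directory over sshfs is a one-liner once the fuse module is loaded - host and paths below are just placeholders:

sshfs user@storage1.example.com:/data/pictures /mnt/pictures
# and to unmount it later:
fusermount -u /mnt/pictures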

Just to show the difference between GoDaddy and IntoVPS... GoDaddy sucks - big time!

I asked GoDaddy to install the TUN/TAP module... they asked me to upgrade to a dedicated server.
I asked GoDaddy to explain why my VPS was restarted without my authorization and why I wasn't informed afterwards... they told me that this kind of thing just happens from time to time and that if I don't want it, I should upgrade to a dedicated server... but even so:

If you need to ensure that your content is run on a server that is not normally rebooted unless rebooted by yourself then going to a fully dedicated server is recommended. There is still a chance for any network related issues or facility issues that can cause the requirement for a power cycle on a fully dedicated server but these are much less likely to happen.

I will let you know how IntoVPS behaves in the future... until then, I'll be alwayshere.net.

29 January 2011

MySQL - grant slave permissions

grant replication slave on *.* to user1@192.168.0.1 identified by '123456';
grant replication client on *.* to user1@192.168.0.1;
grant super,reload,select on *.* to user1@192.168.0.1;

P.S. Of course you have to enable the binary log on the master server (a minimal sketch follows)...
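The master side, in case you need it - the section and values below are the usual defaults, not copied from my config:

# my.cnf on the master
[mysqld]
server-id = 1
log-bin   = mysql-bin

Restart MySQL afterwards and run SHOW MASTER STATUS; to confirm the binary log is active.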

27 January 2011

mdadm vs lvm with reiserfs or ext3? - part 4

Step 1 - Installation
Clean install of Debian 5.0.8 on a SATA disk. During the installation phase I configured LVM over 2 disks, SAMSUNG HD103SJ (also known as Samsung F3), 1 TB each. On top of it I put ReiserFS (the ext3 tests are here)...

I have to add that this is a minimal installation, with no services installed.
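If you want to reproduce it without the installer, the setup looks more or less like this - the device names are a guess, since I did everything from the Debian installer:

pvcreate /dev/sdb1 /dev/sdc1             # the two 1 TB disks
vgcreate storage /dev/sdb1 /dev/sdc1
lvcreate -l 100%FREE -n data storage     # one big logical volume
mkfs.reiserfs /dev/storage/data
mount /dev/storage/data /storage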

Step 2 - DDx3
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 20.5493 s, 52.3 MB/s

I noticed some CPU I/O wait (1-5%), which was not present with MD.

Step 3 - time DD
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 20.4989 s, 52.4 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 20.1651 s, 53.2 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 19.7539 s, 54.4 MB/s

What I noticed during step 3 is that the CPU I/O wait stayed between 1-5%, which is better than ext3, but the write speed was lower.

Step 4 - multiple files
# time for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i bs=1024 count=100; done
...
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00184347 s, 55.5 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00177727 s, 57.6 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00181252 s, 56.5 MB/s
...


real    0m0.175s
user    0m0.020s
sys     0m0.144s

This was a little bit slower than ext3 over LVM... let's add more files!

# time for k in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for j in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i$j$k bs=1024 count=100 >/dev/null 2>/dev/null; done; done; done


real    2m1.054s
user    0m30.818s
sys     1m26.609s

I noticed that the CPU was 100% in use, some I/O wait was present and bash was using ~20%... this is the worst result so far!

Step 5 - LS
# while true; do ls -lah; done    # I left it running for a couple of minutes

I noticed that the CPU was ~90% idle, with no I/O wait. This is good, a little bit better than MD.

Step 6 - RM
# time rm *
real    0m2.976s
user    0m0.208s
sys     0m2.736s

I noticed a lot of CPU time wasted in I/O wait (up to 90%)...

mdadm vs lvm with reiserfs or ext3? - part 3

Step 1 - Installation
Clean install of Debian 5.0.8 on a SATA disk. During the installation phase I configured LVM over 2 disks, SAMSUNG HD103SJ (also known as Samsung F3), 1 TB each. On top of it I put ext3 (I will try ReiserFS later)...

I have to add that this is a minimal installation, with no services installed.

Step 2 - DDx3
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 17.5612 s, 61.1 MB/s

I noticed some CPU I/O wait (11-50%), which was not present with MD.

Step 3 - time DD
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 17.6383 s, 60.9 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 16.943 s, 63.4 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 17.7065 s, 60.6 MB/s

What I noticed during step 3 is that the CPU I/O wait went over 50%, which is bad.

Step 4 - multiple files
# time for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i bs=1024 count=100; done
...
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00124287 s, 82.4 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00116396 s, 88.0 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00111929 s, 91.5 MB/s
...


real    0m0.160s
user    0m0.044s
sys     0m0.096s

This was a little bit faster than ext3 over MD... let's add more files!

# time for k in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for j in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i$j$k bs=1024 count=100 >/dev/null 2>/dev/null; done; done; done


real    1m48.386s
user    0m28.338s
sys     1m13.685s

I noticed that the CPU was 100% in use, some I/O wait was present and bash was using ~20%... this is good! Compared to MD it's approximately the same...

Step 5 - LS
# while true; do ls -lah; done    # I left it running for a couple of minutes

I noticed that the CPU was ~90% idle, with no I/O wait. This is good, a little bit better than MD.

Step 6 - RM
# time rm *
real    0m30.614s
user    0m0.224s
sys     0m1.832s

I noticed a lot of CPU time wasted in I/O wait (up to 90%)... MD was about 4 seconds faster!

26 January 2011

mdadm vs lvm with reiserfs or ext3? - part 2

Step 1 - Installation
Same as before... Clean install of Debian 5.0.8 on a SATA disk. During the installation phase I configured (very easily) the MD raid0 from 2 disks, SAMSUNG HD103SJ (also known as Samsung F3), 1 TB each. On top of it I put ext3 (I tried ReiserFS here)...

I have to add that this is a minimal installation, with no services installed.
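For reference, the manual equivalent of what the installer does is roughly this - again, the device names are a guess:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0
mount /dev/md0 /storage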

Step 2 - DDx3
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 13.9927 s, 76.7 MB/s

I noticed a small amount of I/O wait while doing this... ~2-3%.

Step 3 - time DD
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 14.004 s, 76.7 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 13.6571 s, 78.6 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 14.2677 s, 75.3 MB/s

What I noticed during steps 2 and 3 is that the CPU usage was 100%, like with ReiserFS, but the write speed was ~40% higher (which is good).

Step 4 - multiple files
# time for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i bs=1024 count=100; done
...
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00113659 s, 90.1 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00114485 s, 89.4 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00116158 s, 88.2 MB/s
...


real 0m0.182s
user 0m0.048s
sys 0m0.108s

This was way faster than ReiserFS... let's add more files!

# time for k in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for j in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i$j$k bs=1024 count=100 >/dev/null 2>/dev/null; done; done; done


real 1m54.091s
user 0m30.342s
sys 1m18.085s

I noticed that the CPU was 100% in use, with no I/O wait, and bash was using ~20%... this is also very good! Compared with ReiserFS... well, the difference is less than one second for 17,604 files.

Step 5 - LS

# while true; do ls -lah; done    # I left it running for a couple of minutes

I noticed that the CPU was ~80% in use, with no I/O wait. This is good, but worse than ReiserFS.

Step 6 - RM

# time rm *
real 0m26.316s
user 0m0.216s
sys 0m1.800s

I noticed a lot of CPU time wasted in I/O wait... ReiserFS was much better here!

mdadm vs lvm with reiserfs or ext3?

Step 1 - Installation
Clean install of Debian 5.0.8 on a SATA disk. During the installation phase I configured (very easily) the MD raid0 from 2 disks, SAMSUNG HD103SJ (also known as Samsung F3), 1 TB each. On top of it I put ReiserFS (I will try ext3 later as well)...

I have to add that this is a minimal installation, with no services installed.

Step 2 - DDx3
# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 19.466 s, 55.2 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 19.8199 s, 54.2 MB/s


# dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 19.4925 s, 55.1 MB/s

Step 3 - time DD
# time dd if=/dev/zero of=/storage/test bs=1024 count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 19.4815 s, 55.1 MB/s


real 0m19.938s
user 0m0.572s
sys 0m18.209s

What I noticed during steps 2 and 3 is that the CPU usage was 100%, with 0% I/O wait (which is good).

Step 4 - multiple files
# time for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i bs=1024 count=100; done
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.0017858 s, 57.3 MB/s
... I'm not trying to waste your time here ...
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00160117 s, 64.0 MB/s


real 0m0.162s
user 0m0.044s
sys 0m0.116s

This was fast... let's add more files!

# time for k in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for j in a b c d e f g h i j k l m n o p q r s t u v w x y z; do for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do dd if=/dev/zero of=./$i$j$k bs=1024 count=100 >/dev/null 2>/dev/null; done; done; done


...
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00186699 s, 54.8 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.0018261 s, 56.1 MB/s
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.00179114 s, 57.2 MB/s
...


real 1m54.760s
user 0m29.006s
sys 1m22.965s

I noticed that the CPU was 100% in use, with no I/O wait, and bash was using ~18%... this is also very good!

Step 5 - LS

# while true; do ls -lah; done    # I left it running for a couple of minutes

I noticed that the CPU was ~40% in use, with no I/O wait. This is good!

Step 6 - RM

# time rm *


real 0m2.955s
user 0m0.172s
sys 0m2.692s

I've decided to stop using ZFS today...

I have 2 servers where I was using ZFS, one Debian and one FreeBSD. I managed to crash both of them using ZFS. I have to admit that I was over-using it.

Same hardware for both machines: 1.7 GHz Intel Pentium 4 CPU with 768 MB of RAM and 3x1 TB Samsung disks. The test I was performing: download one torrent with a high number of peers (multiple disk writes at the same time) at high speed (I get up to 10 MB/s - that's a 100 Mbps connection), copy some movies over SMB or FTP at the same time, and start a scrub while doing all of the above...

Debian crashed with a kernel panic and didn't reboot itself; I had to push the button manually.
FreeBSD crashed with:
panic: kmem_malloc(65536): kmem_map too small: 240738304 total allocated
cpuid = 0
Uptime: 5h18m4s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort
... and then rebooted itself.

I have to say that I tuned ZFS on FreeBSD because it is integrated in the kernel and I was expecting more from it, but I didn't touch the Debian one (ZFS-FUSE does not run in the kernel). These are the settings I applied (in /boot/loader.conf):
vm.kmem_size=1152MB
vfs.zfs.arc_max=768MB
vfs.zfs.prefetch_disable=1
One additional thing I noticed is that the load on FreeBSD was 7-8 while writing at 10 MB/s to the disk (FTP or SMB), while on Debian the load was 0.9-1.1.

Now I'm trying to choose between:
- mdadm
- LVM

Will come back with test results of both.

25 January 2011

VPS downtime - GoDaddy conversation

My Request:
Browser : Firefox Version : 3.6.13
Permission to access server : no
Issue : Why was my VPS restarted without asking for permission first?


# uptime
16:58:10 up 1:55
1st answer:
Dear Silasi,


Thank you for contacting Online Support. In order to properly support this issue we will need a statement stating your permission to access your server for troubleshooting purposes will also help us expedite the troubleshooting process. We appreciate your understanding in this matter.


Please let us know if we can assist you in any way.


Regards,
Kim P.
Online Support
My reply:
Hi Kim,


You are NOT authorized to access my server!
I want you to tell me why my VPS was restarted without my knowledge. Also, I wish to know why I wasn't informed after the restart. This should provide enough information for you:


# uptime
09:57:40 up 18:55, 1 user, load average: 0.00, 0.00, 0.00


I can give you a hint if you wish: check the master host!


Thank you,
Stefan.
2nd answer:
Dear Sir/Madam,


Thank you for contacting Server Support regarding your 'alwayshere.net' server.


Regarding the restart that occurred, I have checked and verified that the parent server was rebooted due to unexpected maintenance performed on the server. We do apologize for any inconvenience this has caused you. As it was unexpected, no email would have been sent to notify you that a reboot was going to occur. As well, at this time, our system is not setup to notify customers when a reboot has occurred. Once again, we apologize for the inconvenience.


Please contact us if you have any further issues.


Regards,


David J
Server Support

24 January 2011

ffmpeg - moov atom not found

While playing with the ffmpeg converter (I'm using it to convert videos from any format to MP4) I got the following error message:

ffmpeg - moov atom not found

After lots of reading on Google and installing libraries and other crap, I can say the following (which applies in my case):

  1. there is no missing library preventing you from converting that file
  2. ffmpeg didn't crash while converting your file
  3. check that the file is really there and that ffmpeg has permission to read it
  4. check that the file was transferred 100% to your machine (this was my issue... the transfer had been interrupted at ~60%) - see the quick check after this list
  5. if you're converting an MP4 or AVI file (not MOV), you can try AtomicParsley to fix your moov atoms. Check their website for more information.
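A quick way to verify point 4 - the host and file names below are just placeholders:

ls -l video.avi && md5sum video.avi
ssh user@source.example.com 'ls -l video.avi && md5sum video.avi'

If the sizes or checksums don't match, the transfer is your problem, not ffmpeg.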
Hopefully somebody will avoid wasting half a day reading useless stuff on the internet thanks to this post!

16 January 2011

rsync - /usr/bin/rsync: Argument list too long

After reaching 33,000+ files in the same directory, rsync stopped working. Even a basic ls * returns the same error: Argument list too long. The command I was using was:
rsync -avz path_to_source/dir/* user@host:/path_to_destination/dir/
The problem is that the shell expands the * into one huge argument list before rsync even starts, so with lots of files in one directory the command line becomes too long. The fix is:
rsync -avz path_to_source/dir user@host:/path_to_destination/
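If you want the contents of the directory copied without the extra directory level at the destination, the trailing-slash form does the same job and also avoids the shell expansion:

rsync -avz path_to_source/dir/ user@host:/path_to_destination/dir/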

PHP - zipping remote files

I had been thinking about doing this for a long time... even though it has a huge downside: network and CPU load. When you have a public album online, you should be able to download your original files from the server - if possible as one big file, one archive.

The idea of how to do it came from here. Thank you, Facebook!

I keep the original pictures and the resized ones on different servers. When somebody wants to download a full album, he/she can click the download link on pictures4.net and watch the archive being created. At the end of the process, the archive link is displayed and the owner can download it... The original images go into the zip file, not the resized ones... videos are included as well!

Here is part of the code I use to create the ZIP archive of remote files:


// $zipName, $sql, $webServer and $msg are defined elsewhere - this is only part of the code
$zip = new ZipArchive();
if ($zip->open($zipName, ZipArchive::CREATE) === TRUE)
{
    // grab the next image row to add (the real query is presumably more selective)
    $query = 'select * from images';
    $res   = $sql->select($query);
    $row   = mysql_fetch_array($res);
    if ($row && $row['id'])
    {
        $_SESSION['zipAdded']++;
        // pick one of the image servers at random and pull the original file straight into the zip
        $server = $webServer[mt_rand(0, count($webServer) - 1)];
        $zip->addFromString($row['name'], file_get_contents($server.$row['fileName']));
        $zip->close();
        echo $msg['wait_i_create_zip'].$row['name'].
             $msg['current_zip_size'].number_format(filesize($zipName) / 1024 / 1024, 2).' Mbytes
             <img src="'.$server.$row['fileName'].'" alt="'.$row['name'].'" border="0">
             </div>
             </div>
             <!-- refresh here -->';
    }
    else
    {
        // nothing left to add: show the download link
        // (basename() is my guess - the original snippet had an incomplete str_replace() call here)
        echo '<div class="okMsg">'.$msg['album_zipped_ok'].
             '<a href="zippedAlbums/'.basename($zipName).'">right click and save target as</a></div>';
        $_SESSION['zipDown'] = 1;
    }
}

14 January 2011

nginx & php-fpm - upstream sent too big header while reading response header from upstream

Out of nowhere, this morning my nginx server started returning the following error:
2011/01/14 02:11:10 [error] 20350#0: *1 upstream sent too big header while reading response header from upstream, client: XX.XX.XX.XX, server: pictures4.net, request: "GET /albums.php?page=list&parent=156 HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.pictures4.net"
I started googling for this error. Lots of answers were related to proxy caching, but since I'm not using the proxy feature that's not relevant. Digging more, I found out (here) that there are buffers for FastCGI as well.

This is what the documentation says:
fastcgi_buffer_size

syntax: fastcgi_buffer_size the_size
default: fastcgi_buffer_size 4k/8k
context: http, server, location
This directive sets the buffer size for reading the header of the backend FastCGI process.

By default, the buffer size is equal to the size of one buffer in fastcgi_buffers. This directive allows you to set it to an arbitrary value.

fastcgi_buffers

syntax: fastcgi_buffers the_number is_size
default: fastcgi_buffers 8 4k/8k
context: http, server, location
This directive sets the number and the size of the buffers into which the reply from the FastCGI process in the backend is read.

By default, the size of each buffer is equal to the OS page size. Depending on the platform and architecture this value is one of 4k, 8k or 16k.
On Linux you can get the page size issuing:

getconf PAGESIZE

it returns the page size in bytes.

Example:
fastcgi_buffers 256 4k; # Sets the buffer size to 4k + 256 * 4k = 1028k

This means that any reply by the FastCGI process in the backend greater than 1M goes to disk. Only replies below 1M are handled directly in memory.
So it seems that I hadn't completely configured the nginx server... after adding my buffer values, everything was working just fine.
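Something along these lines in the PHP location block does the trick - the numbers below are only an example, not necessarily the exact values I ended up with:

location ~ \.php$ {
    include              fastcgi_params;
    fastcgi_pass         127.0.0.1:9000;
    fastcgi_buffer_size  32k;   # must hold the biggest response header your app produces
    fastcgi_buffers      16 16k;
}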

13 January 2011

nginx - MP4 streaming

One of the things I love about nginx, besides the fact that it's super fast, is that you can find documentation on the internet for anything you'd like to do with it... and the documentation is good!

For example, I wanted to install a module that allows me to stream MP4 files (from pictures4.net, of course) without making the user download the full file. Imagine downloading a 400-700-1000 MB file just to see the beginning or the middle of it... not too helpful. So I took the YouTube approach and installed the MP4 (H.264) streaming module for nginx. For some reason, this module returned the following error during the compile phase:

gcc -c -pipe  -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g  -D_LARGEFILE_SOURCE -DBUILDING_NGINX  -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/mail \
                -o objs/addon/src/ngx_http_h264_streaming_module.o \
                /home/kit/nginx_mod_h264_streaming-2.2.7//src/ngx_http_h264_streaming_module.c
In file included from /home/kit/nginx_mod_h264_streaming-2.2.7//src/ngx_http_h264_streaming_module.c:2:
/home/kit/nginx_mod_h264_streaming-2.2.7//src/ngx_http_streaming_module.c: In function ‘ngx_streaming_handler’:
/home/kit/nginx_mod_h264_streaming-2.2.7//src/ngx_http_streaming_module.c:158: error: ‘ngx_http_request_t’ has no member named ‘zero_in_uri’
make[1]: *** [objs/addon/src/ngx_http_h264_streaming_module.o] Error 1
make[1]: Leaving directory `/home/kit/nginx-0.9.3'
make: *** [build] Error 2

Very easy to fix... just edit /path_to/nginx_mod_h264_streaming-2.2.7/src/ngx_http_streaming_module.c with your favorite editor and remove the following lines... don't worry, you won't break anything:

/* TODO: Win32 */
if (r->zero_in_uri)
{
    return NGX_DECLINED;
}

Try "make" again and that's it... in order to inspire myself i used this source.
Let me know your comments!

11 January 2011

PHP-FPM & NGINX - losing sessions during HTTPS switch to HTTP

I had my first problem with a different domain (which will be turned into a subdomain of alwayshere.net soon) hosted on the same VPS. I'm talking about pictures4.net. The idea was to use the register and login functions over HTTPS, then redirect the user to HTTP and keep the session information.

I read a lot of pages about how to keep the session between pages, what the main issue is between HTTP and HTTPS, what options you should activate in php.ini... well, I have to say that lots of these internet pages just SUCK! The best discussion I found is this one.

Bottom line, in order to keep the session between HTTP and HTTPS you need to do nothing!

Still, if your sessions are getting lost after switching, you should check the following (maybe more, but this is what I did):

- the user your web server runs as must be able to read the session files (this applies if you're using files and not a DB to keep the session info... I can't fully explain why, but this was the issue I had: after I set the nginx user to the same user as PHP-FPM, sessions worked for me) - see the sketch after this list
- session_start() should be called at the beginning of each PHP page in order to keep the session between pages
- PHP should be able to read/write the session.save_path defined in php.ini
- session.cookie_secure should be disabled (value = 0, the default)
- check that suhosin.session.cryptdocroot is switched off
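For that first point, this is roughly how I would check it - the paths assume a Debian-style PHP install, so adjust them to whatever your php.ini says:

grep -i session.save_path /etc/php5/fpm/php.ini    # where PHP keeps the session files
ls -ld /var/lib/php5                               # the usual location on Debian
# the user nginx / PHP-FPM runs as must be able to read and write in there
chown -R www-data:www-data /var/lib/php5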

GoDaddy - 1st downtime

I was transferring some files from the VPS to my dedicated servers... when the connection just dropped. Now I have 2 ideas in mind:

1. GoDaddy shut down my VPS for some reason
2. The physical server where my VPS is hosted died for some other reason

Whatever it is, I just hope they will keep the 99% availability agreed on when I bought the VPS...

Update 1: Server is back online (06:47 AM) after a downtime of 17 minutes.
Update 2: Server is down again (06:49 AM). I'm going to sleep, my dedicated servers will monitor this...

06 January 2011

Create GoDaddy domain certificate

The idea is to get an HTTPS certificate for the domain, signed by GoDaddy. It's not complicated at all.
  1. Generate your certificate request
    # openssl genrsa -des3 -out alwayshere.net.key 2048
    Enter PEM pass phrase: this_is_secret
    Verifying password - Enter PEM pass phrase: this_is_secret

    # openssl req -new -key alwayshere.net.key -out alwayshere.net.csr
    Country Name (2 letter code) [GB]:CZ
    State or Province Name (full name) [Berkshire]:Brno
    Locality Name (eg, city) [Newbury]:Brno
    Organization Name (eg, company) [My Company Ltd]:Always Here
    Organizational Unit Name (eg, section) []:
    Common Name (eg, your name or your server's hostname) []:Silasi Stefan
    Email Address []:silasistefan@gmail.com
    Please enter the following 'extra' attributes to be sent with your certificate request
    A challenge password []:
    An optional company name []:

    And now you have the CSR - which is the file GoDaddy needs.

  2. Upload it to GoDaddy
    Log in here and paste your CSR for your domain...
  3. Download and install the domain certificate
    GoDaddy will generate your certificate, which you can download as a zip file and install...
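In case it helps, this is more or less where those files end up in nginx - the paths and file names below are assumptions, not my actual config. GoDaddy's zip also contains an intermediate bundle (gd_bundle.crt) that normally has to be appended to your own certificate first, e.g. cat alwayshere.net.crt gd_bundle.crt > alwayshere.net.chained.crt:

server {
    listen 443;
    server_name alwayshere.net;

    ssl on;
    ssl_certificate      /etc/nginx/ssl/alwayshere.net.chained.crt;
    ssl_certificate_key  /etc/nginx/ssl/alwayshere.net.key;
}

One more thing: because the key was generated with -des3, nginx will ask for the passphrase on every restart; openssl rsa -in alwayshere.net.key -out alwayshere.net.nopass.key strips it if that annoys you.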

05 January 2011

DNS up and running...

GoDaddy has a restriction on all DNS servers that are hosted with them (from here):
Go Daddy prohibits the running of a public recursive DNS service on any Go Daddy server. All recursive DNS servers must be secured to allow only internal network access or a limited set of IP addresses. Go Daddy actively scans for the presence of public DNS services and reserves the right to remove any servers from the network that violate this restriction.

In order to comply, you should add the following configuration lines to your named.conf (if you're using named/BIND), inside the options block:
allow-query { any; };
allow-recursion { 127.0.0.0/8; any.other.ip.address/mask; };

The "allow-query" will (guess what) allow all users to query the DNS server, while the "allow-recursion" will limit the queries to the domains that are not hosted on this DNS to the IPs specified in the list. Simple huh?

By the way, the DNS server hosted on the GoDaddy account is now up and running! http://intodns.com/alwayshere.net confirms that everything is OK...

04 January 2011

I'm alive...

So we are alive... this project is finally alive on a GoDaddy VPS with 1 GB of RAM and 15 GB of disk space. I'm wondering how this will look in the next 5-10 years or so... anyway, the blog address is alwayshere.info and the website address is alwayshere.net. Why don't you bookmark them?

As an idea... what do you wish to have always here?