Wednesday, September 29, 2010

what's the difference between <a href="javascript:expand()"> and <a href="#" onclick="javascript:expand()">

what's the difference between <a href="javascript:expand()"> and <a href="#" onclick="javascript:expand()">


this is the outdated way and will cause problems
<a href="javascript:expand()">

and the reason this <a href="#" onclick="javascript:expand()"> scrolls to the top is
that you're not returning false, like so:

<a href="#" onclick="expand();return false;">


=====================

You can use href and onclick together to provide the same functions
to people who have javascript turned off. e.g.
<a href="some.jsp?expand=1" onclick="expand(); return false;">[+]</a>

If onclick returns false, the browser never even looks at the href attribute, and so it is not followed.
Suppose a user has javascript disabled; in this case the browser ignores the onclick handler
and loads the url given in the href - which in this case is supposed to send back HTML
showing the tab expanded.

So, as jaysolomon says, you must return false if you do not want the browser to follow the href.

The other way of coding it is to ensure that the expand function itself returns false; note that even in this case your onclick handler must include the keyword return:

<script type="text/javascript">
function expand(sec) {
// do your stuff...
return false;
}
</script>

<a href="#" onclick="return expand()">[+]</a>

=====================

everyone says it will cause problems, but nobody says what kind of problem :)

=====================

If you have, say, an animated GIF in a page and you click a javascript: protocol link, the animation will stop running and you have to refresh the page to get it going again.

you see?

Reference:
http://www.experts-exchange.com/Web/Web_Languages/JavaScript/Q_20915622.html

Thursday, September 23, 2010

Converting a textfield into an autocomplete using Drupal's Form API

Drupal's API includes a separate set of functions that deal with forms, their elements and their properties. The Forms API reference guide covers all the elements and properties, with a complete reference to the form control structure, so you can easily see which properties each form element accepts. For more details, see the Forms API Quick Start Guide.

Back to adding an autocomplete feature to a textfield in Drupal: the textfield can be a CCK field or one generated by any core or contrib module.
Autocomplete involves the program predicting a word or phrase that the user wants to type without the user actually typing it in completely. This feature is effective when it is easy to predict the word being typed based on those already typed. (Wikipedia)
We will create a module that implements hook_form_alter() so that we can easily attach the autocomplete property to a form's textfield. Let's name this module "my_autocomplete". In the my_autocomplete.module file, add the following code:
function my_autocomplete_form_alter(&$form, $form_state, $form_id) {
  if ($form_id == 'any_node_form') {
    // For a plain textfield you could set the property here directly:
    //$form['field_cck_textfield'][0]['value']['#autocomplete_path'] = 'my/autocomplete';
    // For a CCK textfield, wait until the form is fully built.
    $form['#after_build'][] = '_my_autocomplete_form_afterbuild';
  }
}

function _my_autocomplete_form_afterbuild($form, &$form_state) {
  $form['field_cck_textfield'][0]['value']['#autocomplete_path'] = 'my/autocomplete';
  return $form;
}

function my_autocomplete_menu() {
  $items = array();
  $items['my/autocomplete'] = array(
    'title' => '',
    'page callback' => '_my_autocomplete_terms',
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

function _my_autocomplete_terms($string = '') {
  $matches = array();
  if ($string) {
    // Return up to 5 taxonomy term names that start with the typed string.
    $result = db_query_range("SELECT name FROM {term_data} WHERE LOWER(name) LIKE LOWER('%s%%')", $string, 0, 5);
    while ($data = db_fetch_object($result)) {
      $matches[$data->name] = check_plain($data->name);
    }
  }
  print drupal_to_js($matches);
  exit;
}
Things to remember: use the form's #after_build property in hook_form_alter() only when you want to manipulate form elements after all other modules have finished processing that particular form; that's why it's called after_build. This is typically needed when altering a CCK textfield, because CCK attaches its elements late in the form build.
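For a plain (non-CCK) textfield the #after_build round trip is not needed, because the element is already present when hook_form_alter() runs. A minimal sketch of that variation of the hook above (the form ID and field name are hypothetical, not from the original post):

function my_autocomplete_form_alter(&$form, $form_state, $form_id) {
  // Hypothetical simple form with a plain textfield: the property can be
  // set directly in hook_form_alter(), no #after_build callback required.
  if ($form_id == 'my_plain_form') {
    $form['my_plain_textfield']['#autocomplete_path'] = 'my/autocomplete';
  }
}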

Reference: http://qandeelaslam.com/blogs/cck/converting-texfield-autocomplete-using-drupals-form-api

Tuesday, September 21, 2010

Cache a large array: JSON, serialize or var_export?

Cache a large array: JSON, serialize or var_export?
Monday 06 July 2009 10:30, by Taco van den Broek

While developing software like our framework, sooner or later you will need to cache a large data array to a file. At that point you need to choose which caching method to use. In this article I compare three methods: JSON, serialization, and var_export() combined with include().

Too curious? Jump right to the results!
JSON

The JSON method uses the json_encode and json_decode functions. The JSON-encoded data is stored as is into a plain text file.
Code example

// Store cache
file_put_contents($cachePath, json_encode($myDataArray));
// Retrieve cache
$myDataArray = json_decode(file_get_contents($cachePath));

pros

* Pretty easy to read when encoded
* Can easily be used outside a PHP application

cons

* Only works with UTF-8 encoded data
* Will not work with objects other than instances of the stdClass class.
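
To make the second point concrete, here is a minimal sketch of my own (not from the article): anything that isn't a stdClass instance loses its class on the way through JSON, and non-public properties are dropped entirely.

<?php
class User {
    public $name = 'Taco';
    private $secret = 'hidden'; // dropped by json_encode(): only public data survives
}

$encoded = json_encode(new User());     // {"name":"Taco"}
$decoded = json_decode($encoded);       // stdClass, not User
var_dump($decoded instanceof User);     // bool(false)
var_dump(json_decode($encoded, true));  // plain associative array instead of objects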

Serialization

The serialization method uses the serialize and unserialize functions. The serialized data is, just like the JSON data, stored as is into a plain text file.
Code example

// Store cache
file_put_contents($cachePath, serialize($myDataArray));
// Retrieve cache
$myDataArray = unserialize(file_get_contents($cachePath));

pros

* Does not need the data to be UTF-8 encoded
* Works with instances of classes other than the stdClass class.

cons

* Nearly impossible to read when encoded
* Cannot be used outside a PHP application without writing custom functions
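
And the mirror image for serialize(), again my own sketch rather than the article's: instances keep their class and their private properties, provided the class definition is loaded before unserialize() is called.

<?php
class User {
    public $name = 'Taco';
    private $secret = 'hidden'; // preserved by serialize()
}

$blob = serialize(new User());   // O:4:"User":2:{...}
$copy = unserialize($blob);
var_dump($copy instanceof User); // bool(true)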

Var_export

This method 'encodes' the data using var_export and loads the data using the include statement (no need for file_get_contents!). The encoded data needs to be in a valid PHP file so we wrap the encoded data in the following PHP code:
<?php
return /*var_export output goes here*/;

Code example

// Store cache
file_put_contents($cachePath, "<?php\nreturn " . var_export($myDataArray, true) . ";");
// Retrieve cache
$myDataArray = include($cachePath);

pros

* No need for UTF-8 encoding
* Is very readable (assuming you can read PHP code)
* Retrieving the cache uses one language construct instead of two functions
* When using an opcode cache your cache file will be stored in the opcode cache. (This is actually a disadvantage, see the cons list).

cons

* Needs PHP wrapper code.
* Cannot encode objects of classes that do not implement the __set_state method.
* When using an opcode cache your cache file will be stored in the opcode cache. If you do not need a persistent cache this is useless: most opcode caches can store values in shared memory directly, and if you don't mind keeping the cache in memory you can use that shared memory without writing the cache to disk first.
* Another disadvantage is that your stored file has to be valid PHP. If it contains a parse error (which could happen when your script crashes while writing the cache) your application will not work anymore.
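
The last point can be mitigated by writing the cache atomically: build the file under a temporary name and rename() it into place, so a crash mid-write never leaves a half-written, unparseable file behind. A minimal sketch under that assumption (path and data are placeholders, not from the article):

<?php
function write_php_cache($cachePath, array $data) {
    $code = "<?php\nreturn " . var_export($data, true) . ";\n";
    $tmp  = $cachePath . '.' . uniqid('tmp', true);
    file_put_contents($tmp, $code);   // write the whole file first...
    rename($tmp, $cachePath);         // ...then swap it in; rename() is atomic
                                      // within one filesystem on POSIX systems
}

write_php_cache('/tmp/example-cache.php', array('answer' => 42));
$cached = include '/tmp/example-cache.php';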

Benchmark

In my benchmark I used 5 different data sets of different sizes (measured in memory usage): 904 B, ~18 kB, ~250 kB, ~4.5 MB and ~72.5 MB. For each of these data sets I ran the following routine for each encoding method (a minimal sketch of such a harness follows the list):

1. Encode the data 10 times
2. Calculate the string length of the encoded data
3. Decode the encoded data 10 times
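
The article does not include the benchmark script itself; a harness along these lines would reproduce the routine above for the two function-based methods (the var_export / include case needs an extra file round trip and is left out here; the sample data is a placeholder):

<?php
function bench(array $data, $iterations = 10) {
    $methods = array(
        'json'      => array('json_encode', 'json_decode'),
        'serialize' => array('serialize', 'unserialize'),
    );
    $results = array();
    foreach ($methods as $name => $fns) {
        list($encode, $decode) = $fns;

        $start = microtime(true);
        for ($i = 0; $i < $iterations; $i++) {
            $encoded = $encode($data);          // 1. encode the data 10 times
        }
        $encodeTime = microtime(true) - $start;

        $length = strlen($encoded);             // 2. length of the encoded string

        $start = microtime(true);
        for ($i = 0; $i < $iterations; $i++) {
            $decode($encoded);                  // 3. decode the encoded data 10 times
        }
        $decodeTime = microtime(true) - $start;

        $results[$name] = array($length, $encodeTime, $decodeTime);
    }
    return $results;
}

print_r(bench(array_fill(0, 1000, 'some value')));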

Results

Yay, results! In the result tables you see the length of the encoded string, the total time used for encoding and the total time used for decoding, both in seconds. The benchmark was done on my laptop: 2.53 GHz, 4 GB, Ubuntu Linux, PHP 5.3.0RC4.
904 B array
                      JSON                  Serialization         var_export / include
Length                105                   150                   151
Encoding (s)          0.0000660419464111    0.00004696846008301   0.00014996528625488
Decoding (s)          0.0011160373687744    0.00092697143554688   0.0010221004486084

18.07 kB array
                      JSON                  Serialization         var_export / include
Length                1965                  2790                  3103
Encoding (s)          0.0005040168762207    0.00035905838012695   0.001352071762085
Decoding (s)          0.0017290115356445    0.0011298656463623    0.0056741237640381

290.59 kB array
                      JSON                  Serialization         var_export / include
Length                31725                 45030                 58015
Encoding (s)          0.0076849460601807    0.0057480335235596    0.02099609375
Decoding (s)          0.014955997467041     0.010177850723267     0.030472993850708

4.54 MB array
                      JSON                  Serialization         var_export / include
Length                507885                720870                1059487
Encoding (s)          0.13873195648193      0.11841702461243      0.38376498222351
Decoding (s)          0.29870986938477      0.21590781211853      0.53850317001343

72.67 MB array
                      JSON                  Serialization         var_export / include
Length                8126445               11534310              19049119
Encoding (s)          2.3055040836334       2.7609040737152       6.2211949825287
Decoding (s)          4.5191099643707       8.351490020752        8.7873070240021

We've done the same benchmark on eight other machines, including Windows and Mac OS machines and some web servers running Debian. Some of these machines had PHP 5.2.9 installed, others had already switched to 5.3.0. All showed the same relative results, except for a MacBook on which serialize was faster at encoding the largest data set.
Conclusion

As you can see, the var_export method (without an opcode cache!) doesn't come out that well, and serialize seems to be the overall winner. What bothered me, though, was the largest data set, on which JSON became faster than serialize. Wondering whether this was a glitch or a trend, I fired up my OpenOffice spreadsheet and created some charts:

The charts show the relative speed of each method compared to the fastest method (so 100% is the best a method can do). As you can see, both JSON and var_export become relatively faster as the data set gets big (arrays of 70 MB and bigger? Maybe you should reconsider the structure of your data set :)). So when using a sanely sized data array: use serialize. When you want to go crazy with large data sets: use anything you like, disk I/O will become your bottleneck.

Reactions on "Cache a large array: JSON, serialize or var_export?"

garfix (Patrick van Bergen), 07-09-2009: Good job, Taco. Wish php.net had these kinds of stats.

Geert, 08-04-2009: Very useful benchmarks. Thanks.

Ries van Twisk, 08-13-2009: Do you happen to have any results where you have used an opcode cache? I can only imagine that with an opcode cache the var_export method is faster. Purely theoretically this would mean that with an include the data is 'there' and shouldn't have to be parsed anymore.

Peter Farkas, 09-29-2009: This style is the one I like so much! Thank you! Brilliant work!

Vasilis, 01-15-2010: Great info man... I like benchmarks! Thank you

Nice work!, 03-24-2010: Thanks a million - refreshing to see solid content. Concise and well documented, perfect.

Frank Denis, 05-13-2010: If speed and size matter, igbinary beats all of these hands down: http://opensource.dynamoid.com/

Reference: http://techblog.procurios.nl/k/618/news/view/34972/14863/Cache-a-large-array-JSON-serialize-or-var_export.html

Igbinary is a drop-in replacement for the standard PHP serializer

Igbinary is a drop-in replacement for the standard PHP serializer. Instead of a time- and space-consuming textual representation, igbinary stores PHP data structures in a compact binary form. The savings are significant when using memcached or similar memory-based storage for serialized data.
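
The extension exposes igbinary_serialize() and igbinary_unserialize() as direct counterparts to serialize()/unserialize(); a minimal sketch, assuming the igbinary extension is installed (the sample data is arbitrary):

<?php
$data = array('user' => 'Taco', 'roles' => array('admin', 'editor'));

if (extension_loaded('igbinary')) {
    $blob = igbinary_serialize($data);   // compact binary representation
    $copy = igbinary_unserialize($blob);
    printf("igbinary: %d bytes, serialize(): %d bytes\n",
        strlen($blob), strlen(serialize($data)));
} else {
    $blob = serialize($data);            // fall back to the standard serializer
}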

But where does the name "igbinary" come from? There was once a similar project called fbinary but it has disappeared from the Internet. Its architecture wasn't very clean either. IG is an abbreviation for a Finnish social networking site IRC-Galleria.

Reference: http://opensource.dynamoid.com/

Tuesday, September 14, 2010

nohup

nohup is a POSIX command to ignore the HUP (hangup) signal, enabling the command to keep running after the user who issued it has logged out. The HUP (hangup) signal is, by convention, the way a terminal warns dependent processes of logout.

nohup is most often used to run commands in the background as daemons. Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected. This command is very helpful when there is a need to run numerous batch jobs which are inter-dependent.

nohup is a low-level utility simply configuring a command to ignore a signal. As seen below, nohup is very far from being a full-featured batch system solving all the problems of running programs asynchronously.

Example
The first of the commands below starts the program abcd in the background in such a way that the subsequent log out does not stop it.

$ nohup abcd &
$ exit
Note that this only prevents the process from being sent a hangup signal on logout; if the process's input or output is still attached to the terminal, the session can still hang on logout. See Overcoming hanging, below.

nohup is often used in combination with the nice command to run processes on a lower priority.

$ nohup nice abcd &
Existing jobs
Some shells (e.g. bash) provide a shell builtin that may be used to prevent SIGHUP being sent or propagated to existing jobs, even if they were not started with nohup. In bash, this can be obtained by using disown -h job; using the same builtin without arguments removes the job from the job table, which also implies that the job will not receive the signal. Another relevant bash option is shopt huponexit, which automatically sends the HUP signal to jobs when the shell is exiting normally.

Overcoming hanging
Nohupping backgrounded jobs is typically used to avoid terminating them when logging off from a remote SSH session. A different issue that often arises in this situation is that ssh refuses to log off ("hangs"), since it refuses to lose any data from/to the background job(s). This problem can be overcome by redirecting all three I/O streams:

nohup ./myprogram > foo.out 2> foo.err < /dev/null &
Also note that a closing SSH session does not always send a HUP signal to dependent processes. Among other things, this depends on whether a pseudo-terminal was allocated or not.

Wednesday, September 8, 2010

Thursday, September 2, 2010

ERROR 1135 (00000): Can't create a new thread (errno 35); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug

Site off-line
The site is currently not available due to technical problems. Please try again later. Thank you for your understanding.


--------------------------------------------------------------------------------

If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider.

The mysqli error was: Can't create a new thread (errno 35); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug.


- ERROR 1135 (00000): Can't create a new thread (errno 35); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug


- Stopping mysql
mysql Waiting for PIDS

- 100330 11:20:02 [Note] Error reading relay log event: slave SQL thread was killed
100330 11:20:10 [ERROR] Can't create thread to kill server

http://www.haqthegibson.com/article/34


==========

/boot/defaults/loader.conf
/boot/loader.conf
kern.maxfiles


> Here are some other kernel variables that may be of interest:
> kern.maxfiles: 15000
> kern.maxfilesperproc: 7408
> kern.ipc.maxsockets: 8232

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/configtuning-kernel-limits.html

http://www.krellis.org/unix-stuff/mysql-freebsd-threads.html
============
[mysqld_safe]
open-files-limit=10240
============


The command `fstat -p X` where X is the PID of the dccifd daemon might
be illuminating.

# fstat -p `cat /usr/local/mysql/bsd-sql.nai360.com.pid`

# fstat -p `cat /usr/local/mysql/china-en.nai360.com.pid`

# pstat -T

# limits

# sysctl -a | grep -i kern.max

# sysctl -a | grep -i file

# sysctl -a|grep -i mem

# sysctl -a | grep -i vm

# cat /etc/my.cnf | grep -i open

# freecolor -t -m -o

mysql> SHOW STATUS LIKE '%open%';

mysql> SHOW STATUS LIKE '%file%';

mysql> SHOW STATUS LIKE '%thread%';

mysql> SHOW STATUS LIKE '%conn%';

mysql> SHOW STATUS WHERE Variable_name IN (
'Open_files',
'Open_tables',
'Threads_connected',
'Threads_created',
'Threads_running',
'Max_used_connections'
);

mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_cache_size';

mysql> SHOW GLOBAL STATUS LIKE 'Binlog%';

mysql> SHOW GLOBAL VARIABLES LIKE '%open%';

mysql> SHOW GLOBAL VARIABLES LIKE '%max%';

mysql> SHOW GLOBAL VARIABLES LIKE '%thread%';

mysql> SHOW GLOBAL VARIABLES LIKE '%innodb%';

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN (
'innodb_buffer_pool_size',
'innodb_open_files',
'innodb_thread_concurrency',
'max_connections',
'innodb_open_files',
'open_files_limit',
'table_open_cache'
);

mysql> SHOW FULL PROCESSLIST\G

# mtop
http://mtop.sourceforge.net/mtop.html

# mkill

# mysqladmin -u root -p extended-status

# mysqladmin kill
===
There's nothing in /boot/loader.conf or /etc/sysctl.conf about the kern.maxproc or anything like that.

===
What is your max_connections setting? For every connection, one (or more) files have to be opened. Try to reduce it. The table_cache setting can also influence the number of open files. See the MySQL manual for more help.

Both max_connections and table_cache have been tuned to be as small as possible while leaving us some headroom for spikes. Thanks for the answer. – Conor McDermottroe Nov 24 at 14:21
http://serverfault.com/questions/87333/max-open-files-mysql-5-0-on-freebsd-7-0

http://dev.mysql.com/doc/refman/5.0/en/not-enough-file-handles.html

http://lists.mysql.com/mysql/217340

=======
MySQL Thread Problems On FreeBSD
Last Updated: 4/21/05

At DynDNS, we're big fans of Open Source technologies. One of the major parts of our system is the MySQL Database Server. MySQL is a wonderful piece of software, and serves us and our customers very well by providing the backend database for every hit on our website.

Running MySQL effectively in a multi-processor environment on FreeBSD 4.x requires the use of LinuxThreads, because FreeBSD 4.x's native pthreads cannot scale across CPUs. MySQL provides binaries that are linked against LinuxThreads, which allows each thread to appear to the OS as a separate process, allowing them to run on separate CPUs.

Several months ago we started running into problems where we couldn't create more than about 700 threads. We would always see the following error:

Can't create a new thread (errno 35); if you are not out of available
memory, you can consult the manual for a possible OS-dependent bug
We searched and searched and couldn't come up with a solution. Eventually we simply re-tuned our application so it did not need as many threads, largely by turning down the MySQL wait_timeout variable, causing idle threads to time out more quickly, since our application is smart enough to reconnect when a connection has been closed by the server.
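
The post does not say what language their application is written in, but the reconnect-on-disconnect idea looks roughly like this in PHP with mysqli (host, credentials and query are placeholders): when an idle connection is closed by wait_timeout, the next query fails with error 2006 (server has gone away) or 2013 (lost connection), and the client opens a fresh connection and retries once.

<?php
function query_with_reconnect(mysqli &$db, $sql) {
    $result = $db->query($sql);
    // 2006 = server has gone away, 2013 = lost connection during query;
    // both usually mean wait_timeout closed our idle connection.
    if ($result === false && in_array($db->errno, array(2006, 2013))) {
        $db = new mysqli('db.example.com', 'user', 'pass', 'dbname');
        $result = $db->query($sql);
    }
    return $result;
}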

Recently, we decided it was time to purchase a MySQL Network subscription, both as a way of contributing back to the development of MySQL, and for the technical support resources that it would make available to us. One of the issues we requested help on was this thread issue.

After some troubleshooting steps back and forth, Sinisa Milivojevic provided us with the solution we needed. There were actually two problems that needed to be corrected. First, it turns out that the LinuxThreads version used in the FreeBSD ports system allocates a static amount of stack memory to every thread that is created, and this is hard-coded at 2 MB per thread! Second, this LinuxThreads version also hard-codes the maximum number of threads per process to 1024.

With this information (and patches) in hand, we made the appropriate changes, re-compiled and re-installed LinuxThreads, re-ran the test code that we had gotten from MySQL, and lo and behold, we were able to create up to 4096 threads without a problem. A quick re-start of our MySQL daemon later, and our test script was able to use the full 2048 maximum connections specified in our configuration file.

The two patches below change the STACK_SIZE to 128K, and the PTHREAD_THREADS_MAX to 4096, which should be sufficient for most people's needs. (Note: you may need a larger STACK_SIZE if your threaded applications are stack-intense - 128K is the default used by MySQL, though, and generally ought to be enough for most purposes.) These patches are against the FreeBSD port devel/linuxthreads with version number 2.2.3_16.

internals.patch
sysdeps.patch
To use these patches, do the following (as root, or whatever user on your system has the appropriate privileges to mess with ports):

cd /usr/ports/devel/linuxthreads
make clean
make patch (this will download, extract, and run the port-specific patches on the LinuxThreads distribution)
cd work/linuxthreads-2.2.3_16
patch -p0 < /path/to/internals.patch
patch -p0 < /path/to/sysdeps.patch
cd ../..
make all
make install
Voilà, you will now have LinuxThreads installed with the appropriate patches.

Many thanks to Sinisa Milivojevic and Victoria Reznichenko of MySQL for helping us to find and solve this problem, and for giving me permission to post the solution here. We highly recommend MySQL, and if you need top-notch technical support for MySQL, there's no better place to get it than direct from the source through the MySQL Network.



http://dev.mysql.com/doc/refman/5.1/en/server-status-variables.html#statvar_Opened_files

http://dev.mysql.com/doc/refman/5.1/en/server-status-variables.html#statvar_Open_files

http://forums.freebsd.org/showthread.php?t=1553

http://serverfault.com/questions/87333/max-open-files-mysql-5-0-on-freebsd-7-0


http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-09/msg01325.html

========
DB Servers: One Master, Two Read Only (replication)
4 GB of Memory on each server
FreeBSD 6.3-RELEASE-p3 FreeBSD 6.3-RELEASE-p3
MySQL 5.0.1



Here are some relevant items from my.cnf:
- set-variable = max_connections=1000
- set-variable = key_buffer_size=384M
- set-variable = read_buffer_size=64M
- set-variable = read_rnd_buffer_size=32M
- set-variable = thread_cache_size=20


You're shooting yourself in the foot:
1000*2MB=2G for thread stack
+ 384M
+ 1000 * (sort_buffer_size+64M+binlog_cache_size_innodb)

You don't have that much memory.
http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html

There's a similar formula for MyISAM, but can't seem to find it at the moment.
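
As a rough sanity check of that warning (my arithmetic, not from the thread), using the my.cnf values quoted above, the 2 MB per-thread stack from the formula, and pessimistically assuming every connection allocates its full per-connection buffers:

<?php
$mb = 1024 * 1024;

$max_connections      = 1000;
$thread_stack         = 2 * $mb;    // LinuxThreads stack per thread
$key_buffer_size      = 384 * $mb;  // shared, allocated once
$read_buffer_size     = 64 * $mb;   // per connection
$read_rnd_buffer_size = 32 * $mb;   // per connection

$worst_case = $key_buffer_size
    + $max_connections * ($thread_stack + $read_buffer_size + $read_rnd_buffer_size);

printf("%.1f GB worst case vs 4 GB of RAM\n", $worst_case / (1024 * $mb));
// Prints roughly 96.1 GB - far beyond the 4 GB these servers have.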

Hi Vince,

Thanks for the advice. We have already raised the memory limits:

kern.maxdsiz="1843M" # 1.8GB
kern.dfldsiz="1843M" # 1.8GB

Any other ideas?

http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-09/msg01358.html



=====

http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html

==========

Re: Maximum memory allocation per process

--------------------------------------------------------------------------------

From: Jeremy Chadwick <koitsu@xxxxxxxxxxx>
Date: Thu, 22 May 2008 06:38:19 -0700

--------------------------------------------------------------------------------
On Thu, May 22, 2008 at 11:00:37PM +1000, Adrian Thearle wrote:

I have a problem with a Perl script running out of memory. From my googling
I have found that Perl itself does not seem to impose any memory limits,
and I have checked ulimit and login.conf for any user-class limitations but
found nothing that seems to be limiting my memory.

I have 128MBytes of RAM and a 2Gbyte swap partition.

I am currently running
FreeBSD albert 6.2-STABLE FreeBSD 6.2-STABLE #11: Sun Sep 2 00:45:05 EST
2007
which I guess isn't exactly the latest... but the same thing happens on my
REL7.0 Box

The process (imapsync in this case) runs out of RAM at pretty much 512 MB. I
read on a forum that FreeBSD 6 imposes a limit of 512 MB per process, but I
have found nowhere to tune this, or even to see what it is.
have found no where to tune this, or even see what it is.


You need to modify some kernel settings via /boot/loader.conf and
reboot. Here's what we use on our production RELENG_6 and RELENG_7
boxes:

# Increase maximum allocatable memory on a process to 2GB.
# (We don't choose 3GB (our max RAM) since that would
# exhaust all memory, and result in a kernel panic.)
# Set default memory size as 768MB.
# Maximum stack size is 256MB.
#
kern.maxdsiz="2048M"
kern.dfldsiz="768M"
kern.maxssiz="256MB"


I have also read that there are two sysctls, namely kern.maxdsiz and
kern.maxssiz, that can tune memory allocation, but what happened to them in
FreeBSD 6?


These are not sysctls, they are kernel settings. They exist on both
RELENG_6 and RELENG_7.

http://unix.derkeiler.com/Mailing-Lists/FreeBSD/hackers/2008-05/msg00258.html

http://dev.mysql.com/doc/refman/5.1/en/server-status-variables.html#statvar_Connections