Tuesday, December 26, 2017

What's the simplest way to strip trailing whitespace from all lines in a file?

What's the simplest way to strip trailing whitespace from all lines in a file?

The "simplest" way is to just use :substitute:

:%s/\s\+$//e

:%s to run :substitute over the range %, which is the entire buffer.
\s to match all whitespace characters.
\+ to repeat them 1 or more times.
$ to anchor at the end of the line.
The e flag to not give an error if there is no match (i.e. the file is already without trailing whitespace).

However, this is probably not the "best" way as it causes two side-effects:

1. it moves the cursor to the last match;
2. it resets the last search term.

You can fix both items by turning this into a function:

fun! TrimWhitespace()
    let l:save = winsaveview()
    %s/\s\+$//e
    call winrestview(l:save)
endfun

And then use it like:

:call TrimWhitespace()

The winsaveview() will save the current "view", which includes the cursor position, folds, jumps, etc. The winrestview() at the end will restore this from the saved variable.
The last-used search term is automatically restored after leaving a function, so we don't have to do anything else for this.
Since it is somewhat annoying to type :call all the time, you can define a command:

command! TrimWhitespace call TrimWhitespace()

Which can be used without the :call:

:TrimWhitespace

And you can of course bind it to a key:

:noremap <Leader>w :call TrimWhitespace()<CR>

Some people like to automatically do this before they write a file to disk, like so:

autocmd BufWritePre * :call TrimWhitespace()

I don't like it, as some formats require trailing whitespace (such as Markdown), and on some other occasions you even want trailing whitespace in your code (such as formatting an email, and using the -- marker to indicate the start of a signature).
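If you do want the autocmd but need to exempt such filetypes, something along these lines should work (a sketch; the filetype list is just an example):

autocmd BufWritePre * if index(['markdown', 'mail'], &filetype) < 0 | call TrimWhitespace() | endif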

Reference:

https://vi.stackexchange.com/questions/454/whats-the-simplest-way-to-strip-trailing-whitespace-from-all-lines-in-a-file?newreg=dfad440fc0e14ac1b0afde5ce4e41e27

Sunday, December 24, 2017

SSH clients for Windows

SSH clients for Windows

MobaXterm supports MOSH

https://mobaxterm.mobatek.net/

Xshell 5

https://www.netsarang.com/products/xsh_overview.html

How to get Git to clone into current directory

How to get Git to clone into current directory

# git init .
# git remote add origin <repository-url>
# git pull origin master

Reference:

https://stackoverflow.com/questions/9864728/how-to-get-git-to-clone-into-current-directory/16811212

Vim restore opened files

Vim restore opened files

Add these in ~/.vimrc:

# vim ~/.vimrc

" Save the current vim sessions
noremap <F2> :mksession! ~/vim_session <cr>

" Load the saved vim sessions
noremap <F3> :source ~/vim_session <cr>

Reference:

https://stackoverflow.com/questions/1416572/vi-vim-restore-opened-files

Turning off auto indent when pasting text into vim

Method 1:

:set paste
:set nopaste

or in ~/.vimrc (bind a toggle key; <F2> here is just an example):

set pastetoggle=<F2>

Method 2:

:r! cat

Then press Ctrl-Insert to paste.
Then press Enter.
Then press Ctrl-D (twice if needed) to signal end of input.

Monday, December 11, 2017

Web sessions

Web sessions

The most common method is to store a token, or session ID, in a browser cookie. Based on that token, the server then loads the session data from a data store. Over the years, a number of best practices have evolved that make cookie-based web sessions reasonably safe. The OWASP organization lists a number of recommendations aimed at reducing common attacks such as session hijacking or session fixation.
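As a rough sketch of the idea in Go (the names, the in-memory store, and the cookie settings here are illustrative only, and the map is not safe for concurrent use):

package main

import (
    "crypto/rand"
    "encoding/hex"
    "net/http"
)

// sessions maps a random token (stored in a browser cookie) to server-side session data.
var sessions = map[string]map[string]string{}

func handler(w http.ResponseWriter, r *http.Request) {
    c, err := r.Cookie("session")
    if err != nil || sessions[c.Value] == nil {
        // No valid session yet: generate a random token, store it, and set it as a cookie.
        b := make([]byte, 32)
        rand.Read(b)
        token := hex.EncodeToString(b)
        sessions[token] = map[string]string{"user": "anonymous"}
        http.SetCookie(w, &http.Cookie{Name: "session", Value: token, HttpOnly: true})
        w.Write([]byte("new session created"))
        return
    }
    // Valid token: load the session data from the store.
    w.Write([]byte("hello again, " + sessions[c.Value]["user"]))
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}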

Sticky sessions

https://stackoverflow.com/questions/10494431/sticky-and-non-sticky-sessions/13641836#13641836

Reference:

https://blog.gopheracademy.com/advent-2017/web-sessions-and-users/

https://en.wikipedia.org/wiki/Session_hijacking

https://en.wikipedia.org/wiki/Session_fixation

https://github.com/rivo/sessions

https://en.wikipedia.org/wiki/HTTP_cookie#Cookie_theft_and_session_hijacking

https://en.wikipedia.org/wiki/Cross-site_request_forgery

https://en.wikipedia.org/wiki/Representational_state_transfer#Stateless

https://en.wikipedia.org/wiki/Source_routing

Saturday, December 9, 2017

perl: warning: Setting locale failed.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

# locale-gen
# dpkg-reconfigure locales

Position absolute but relative to parent

Position absolute but relative to parent


#father {
   position: relative;
}

#son1 {
   position: absolute;
   top: 0;
}

#son2 {
   position: absolute;
   bottom: 0;
}

Reference:

https://stackoverflow.com/questions/10487292/position-absolute-but-relative-to-parent

How to place div side by side

How to place div side by side

Method 1:

.container{
    display: flex;
}
.fixed{
    width: 200px;
}
.flex-item{
    flex-grow: 1;
}

<div class="container">
  <div class="fixed"></div>
  <div class="flex-item"></div>
</div>

Method 2:

<div style="width: 100%; overflow: hidden;">
    <div style="width: 600px; float: left;"> Left </div>
    <div style="margin-left: 620px;"> Right </div>
</div>

Method 3:

<div style="width: 100%; display: table;">
    <div style="display: table-row">
        <div style="width: 600px; display: table-cell;"> Left </div>
        <div style="display: table-cell;"> Right </div>
    </div>
</div>

Reference:

https://stackoverflow.com/questions/2637696/how-to-place-div-side-by-side

docker ubuntu /bin/sh: 1: locale-gen: not found

docker ubuntu /bin/sh: 1: locale-gen: not found

In Dockerfile:

RUN apt-get install -y locales
RUN locale-gen en_US.UTF-8

Reference:

https://stackoverflow.com/questions/39760663/docker-ubuntu-bin-sh-1-locale-gen-not-found

get data between html <tag> and </tag>

get data between html <tag> and </tag>

$regex = '`<code>(.*?)</code>`s';

Reference:

https://stackoverflow.com/questions/9253027/get-everything-between-tag-and-tag-with-php/9253072

PDO Transaction syntax with try catch

PDO Transaction syntax with try catch


if ($dbh->beginTransaction()) 
{
  try 
  {
    //your db code
    $dbh->commit();
  } 
  catch (Exception $ex) 
  {
    if ($dbh->inTransaction())
    {
       $dbh->rollBack();
    }        
  }
}

Reference:

https://stackoverflow.com/questions/24408434/pdo-transaction-syntax-with-try-catch

How to raise PDOException?

How to raise PDOException?

Set the attribute PDO::ATTR_ERRMODE to PDO::ERRMODE_EXCEPTION, as soon as you init your pdo object:

$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
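A minimal sketch putting it together (the DSN, credentials, and table name are placeholders):

<?php
$conn = new PDO('mysql:host=localhost;dbname=test', 'db_user', 'db_pass');
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $conn->query('SELECT * FROM no_such_table'); // now throws PDOException
} catch (PDOException $e) {
    echo $e->getMessage();
}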

XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource

XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource

APIs are the threads that let you stitch together a rich web experience. But this experience has a hard time translating to the browser, where the options for cross-domain requests are limited to techniques like JSON-P (which has limited use due to security concerns) or setting up a custom proxy (which can be a pain to set up and maintain).

Cross-Origin Resource Sharing (CORS) is a W3C spec that allows cross-domain communication from the browser. By building on top of the XMLHttpRequest object, CORS allows developers to work with the same idioms as same-domain requests.

The use-case for CORS is simple. Imagine the site alice.com has some data that the site bob.com wants to access. This type of request traditionally wouldn’t be allowed under the browser’s same origin policy. However, by supporting CORS requests, alice.com can add a few special response headers that allow bob.com to access the data.

As you can see from this example, CORS support requires coordination between both the server and client. Luckily, if you are a client-side developer you are shielded from most of these details. The rest of this article shows how clients can make cross-origin requests, and how servers can configure themselves to support CORS.

Method 1:

On the remote server, add:

<?php
header('Access-Control-Allow-Origin: http://symfony.cent-dev.local');
#header('Access-Control-Allow-Headers: X-Requested-With');
#header('Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS');
?>

Then inspect the response in the browser on the client side; you should see the headers above.

Method 2:

On the remote server, edit your Apache configuration file:

<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "http://symfony.cent-dev.local"
</IfModule>

Note: you can replace http://symfony.cent-dev.local with the wildcard *.

Note: and don't forget to enable the module: a2enmod headers

Method 3:

Add a proxy script on your server (e.g. proxy.php), then have your client-side script request proxy.php.

The proxy.php script then forwards the request to the remote server.

Method 4:

On your server, set up proxy on Apache:

<LocationMatch "/api">
   ProxyPass http://remote-server.com:8000/api/
   #Header add "Access-Control-Allow-Origin" "*"
   Header add "Access-Control-Allow-Origin" "http://symfony.cent-dev.local"
</LocationMatch>

Note: You need to enable mod_proxy and mod_headers.

Reference:

http://www.html5rocks.com/en/tutorials/cors/
http://www.html5rocks.com/en/tutorials/file/xhr2/#toc-cors
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
http://www.andlabs.org/html5.html
https://code.google.com/p/html5security/wiki/CrossOriginRequestSecurity

Thursday, December 7, 2017

How to access the service in a custom console command?

How to access the service in a custom console command?

You can use Dependency Injection in commands with ease since Symfony 3.3 (May 2017).

Use PSR-4 services autodiscovery in the services.yml:

services:
    _defaults:
        autowire: true

    App\Command\:
        resource: ../Command

Then use common Constructor Injection and finally even Commands will have clean architecture:

final class MyCommand extends Command
{
    /**
     * @var SomeDependency
     */
    private $someDependency;

    public function __construct(SomeDependency $someDependency)
    {
        $this->someDependency = $someDependency;

        // this is required due to parent constructor, which sets up name 
        parent::__construct(); 
    }
}

Alternative method is to extend Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand:

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;

class MyCommand extends ContainerAwareCommand
{
    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $em = $this->getContainer()->get('doctrine')->getEntityManager();
    }
}

Reference:

https://stackoverflow.com/questions/19321760/symfony2-how-to-access-the-service-in-a-custom-console-command

https://www.tomasvotruba.cz/blog/2017/05/07/how-to-refactor-to-new-dependency-injection-features-in-symfony-3-3/#4-use-psr-4-based-service-autodiscovery-and-registration

http://symfony.com/blog/new-in-symfony-3-4-lazy-commands

Saturday, November 18, 2017

Jet Profiler for MySQL

Jet Profiler for MySQL

is a real-time query performance and diagnostics tool for the MySQL database server. Its core features:

- Query, table and user performance
- Graphical visualisation
- Low overhead
- User friendly

https://www.jetprofiler.com/

Saturday, November 11, 2017

Theme the node edit form of the custom content type in Drupal 8

Theme the node edit form of the custom content type in Drupal 8

/**
* Implements hook_theme_suggestions_alter().
*/
function MyModuleName_theme_suggestions_alter(array &$suggestions, array $variables, $hook) {
  if ($hook == 'node_edit_form') {
    if ($node = \Drupal::routeMatch()->getParameter('node')) {
      $content_type = $node->bundle();
    } else {
      $current_path = \Drupal::service('path.current')->getPath();
      $path_args = explode('/', $current_path);
      $content_type = $path_args[3];
    }
    $suggestions[] = 'node_edit_form__' . $content_type; // Note: You can also specify a custom theme ID here. See hook_theme() below.
  }
}

The following code (hook_theme) is optional:

/**
 * Implements  hook_theme($existing, $type, $theme, $path)
 */
function mydemo_theme() {
  return [
    'my_custom_theme_id' => [
      'render element' => 'form',
      #'path' => '/var/www/html/drupal8/web/modules/custom/mydemo/templates',
      'template' => 'asdf222',
    ],
  ];
}

Next, create twig templates in your theme's template directory in the form of node-edit-form--NODE-TYPE-SEPARATED-WITH-DASHES.html.twig:


<b>Test</b>

{{ form.form_id }}
{{ form.form_token }}

{{ form }}

Reference:

https://www.drupal.org/forum/support/post-installation/2015-10-31/drupal-8-node-edit-template

https://api.drupal.org/api/drupal/core%21lib%21Drupal%21Core%21Render%21theme.api.php/function/hook_theme_suggestions_HOOK_alter/8.2.x

https://drupal.stackexchange.com/questions/200602/override-theme-template-from-module-without-implementing-a-theme

Sunday, October 15, 2017

Saturday, October 14, 2017

MySQL resolve Host name to IP Address or Vice Versa

MySQL resolve Host name to IP Address or Vice Versa

# resolveip 192.168.5.11

# resolveip mysql-slave

Reference:

https://dev.mysql.com/doc/refman/5.7/en/resolveip.html

Monday, October 9, 2017

Go at Google: Language Design in the Service of Software Engineering

Go at Google: Language Design in the Service of Software Engineering

https://talks.golang.org/2012/splash.article

Good C string library

Good C string library

http://site.icu-project.org/

Reference:

https://stackoverflow.com/questions/4688041/good-c-string-library

Content-Length not sent when gzip compression enabled in Apache?

Content-Length not sent when gzip compression enabled in Apache?

Philippe: "Apache uses chunked encoding only if the compressed file size is larger than the DeflateBufferSize. Increasing this buffer size will therefore prevent the server using chunked encoding also for larger files, causing the Content-Length to be sent even for zipped data."
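For example, in the Apache configuration (a sketch; 65536 is an arbitrary value, pick something larger than your typical compressed response):

<IfModule mod_deflate.c>
    DeflateBufferSize 65536
</IfModule>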

Reference:

https://serverfault.com/questions/183843/content-length-not-sent-when-gzip-compression-enabled-in-apache/183856#183856

https://stackoverflow.com/questions/2287950/how-do-you-set-the-correct-content-length-header-when-the-webserver-automaticall

Sunday, October 8, 2017

Jet Profiler for MySQL

Jet Profiler for MySQL

is a real-time query performance and diagnostics tool for the MySQL database server. Its core features:

- Query, table and user performance
- Graphical visualisation
- Low overhead
- User friendly

Reference:

https://www.jetprofiler.com/

InnoDB memory usage buffer pool status

InnoDB memory usage buffer pool status

Making sense of INNODB buffer pool stats

After having read this page in the mysql documentation, I tried to make sense of our current InnoDB usage. Currently, we allocate 6GB of RAM for the buffer pool. Our database size is about the same. Here's the output from show engine innodb status\G (we're running v5.5)

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 6593445888; in additional pool allocated 0
Dictionary memory allocated 1758417
Buffer pool size   393215
Free buffers       853
Database pages     360515
Old database pages 133060
Modified db pages  300
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 7365790, not young 23099457
0.00 youngs/s, 0.00 non-youngs/s
Pages read 1094342, created 185628, written 543182148
0.00 reads/s, 0.00 creates/s, 37.32 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 360515, unzip_LRU len: 0
I/O sum[2571]:cur[0], unzip sum[0]:cur[0]

I wanted to know how well we're utilizing the buffer cache. After initially glancing at the output, it appeared that we are indeed using it, based on the fact that Pages made young and not young both have numbers in them and the Buffer pool hit rate is 1000 / 1000 (which, from what I saw elsewhere on the web, means it's being used pretty heavily. True?)

What's throwing me through a loop is why the young-making rate and not are both at 0/1000 and the young/s and non-young/s accesses are both at 0. Those would all indicate that it's not being used at all, right?

Can anyone help make sense of this?

This is in pages not bytes:

The Buffer pool size   393215

To see the Buffer Pool size in GB run this:

Note: As of MySQL 5.7.6, these global status tables have moved from information_schema to performance_schema, so just change "information_schema" to "performance_schema" in the queries below to make them work.

SELECT FORMAT(BufferPoolPages*PageSize/POWER(1024,3),2) BufferPoolDataGB FROM
(SELECT variable_value BufferPoolPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_total') A,
(SELECT variable_value PageSize FROM information_schema.global_status
WHERE variable_name = 'Innodb_page_size') B;

This is the number of pages with data inside the Buffer Pool:

Database pages     360515

To see the amount of data in the Buffer Pool size in GB run this:

SELECT FORMAT(BufferPoolPages*PageSize/POWER(1024,3),2) BufferPoolDataGB FROM
(SELECT variable_value BufferPoolPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
(SELECT variable_value PageSize FROM information_schema.global_status
WHERE variable_name = 'Innodb_page_size') B;

To see the percentage of the Buffer Pool in use, run this:

SELECT CONCAT(FORMAT(DataPages*100.0/TotalPages,2),' %') BufferPoolDataPercentage FROM
(SELECT variable_value DataPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
(SELECT variable_value TotalPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;

This is the number of pages in the Buffer Pool that have to be written back to the database. They are also referred to as dirty pages:

Modified db pages  300

To see the Space Taken Up by Dirty Pages, run this:

SELECT FORMAT(DirtyPages*PageSize/POWER(1024,3),2) BufferPoolDirtyGB FROM
(SELECT variable_value DirtyPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') A,
(SELECT variable_value PageSize FROM information_schema.global_status
WHERE variable_name = 'Innodb_page_size') B;

To see the Percentage of Dirty Pages, run this:

SELECT CONCAT(FORMAT(DirtyPages*100.0/TotalPages,2),' %') BufferPoolDirtyPercentage FROM
(SELECT variable_value DirtyPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') A,
(SELECT variable_value TotalPages FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;

As for the other things in the display, run this:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';

You'll see all the status variables for the Buffer Pool. You can apply the same queries against whatever you need to examine.

The buffer pool is divided into two parts, a young list and a not-young list. The making rate shows how many pages in the buffer pool are being shuffled between the two lists.

Pages made young are not-young pages being made young (i.e. being read out of the cache). Pages made not-young are pages moved from the young list because either they are too old, or because the young list is full.

The rate at which pages are moved between the two depends upon how much of the buffer pool is currently being used vs the size of the young pool. A rate of zero means your active set (the pages you are using) is smaller than the young pool.

Reference:

http://dba.stackexchange.com/questions/56494/making-sense-of-innodb-buffer-pool-stats

http://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html

How much memory do I need for InnoDB buffer pool?

How much memory do I need for InnoDB buffer pool?

The following query gives you the recommended InnoDB buffer pool size based on all InnoDB Data and Indexes with an additional 60%:

SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) AS RIBPS_GB FROM
(SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
FROM information_schema.tables WHERE engine='InnoDB') A;

+-------+
| RIBPS |
+-------+
|     8 |
+-------+

With this output, you would set the following in /etc/my.cnf:

[mysqld]
innodb_buffer_pool_size=8G

After a few days, run this query to see the actual GB of memory in use in the InnoDB buffer pool:

SELECT (PagesData*PageSize)/POWER(1024,3) DataGB FROM
(SELECT variable_value PagesData
FROM performance_schema.global_status
WHERE variable_name='Innodb_buffer_pool_pages_data') A,
(SELECT variable_value PageSize
FROM performance_schema.global_status
WHERE variable_name='Innodb_page_size') B;

You need a buffer pool a bit (say 10%) larger than your data (the total size of your InnoDB tablespaces).

If you want to accommodate an additional 10%, plus account for a 25% increase in data and indexes over time, the following query will produce exactly what you need to set innodb_buffer_pool_size to in /etc/mysql/mysql.conf.d/mysqld.cnf:

SET @growth = 1.25;

SELECT
 CONCAT(
  CEILING(RIBPS / POWER(1024, pw)),
  SUBSTR(' KMGT', pw + 1, 1)
 ) Recommended_InnoDB_Buffer_Pool_Size
FROM
 (
  SELECT
   RIBPS,
   FLOOR(LOG(RIBPS) / LOG(1024)) pw
  FROM
   (
    SELECT
     SUM(data_length + index_length) * 1.1 * @growth AS RIBPS
    FROM
     information_schema. TABLES AAA
    WHERE
     ENGINE = 'InnoDB'
    GROUP BY
     ENGINE
   ) AA
 ) A;

Reference:

http://blog.ijun.org/2016/02/innodb-memory-usage-buffer-pool-status.html

https://dba.stackexchange.com/questions/27328/how-large-should-be-mysql-innodb-buffer-pool-size

https://dba.stackexchange.com/questions/125164/information-schema-global-variables-alternative-in-5-7-more-info-about-show-com

http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/

Disable TLS 1.0 and 1.1 in Apache 2.4

Disable TLS 1.0 and 1.1 in Apache 2.4:

# vim /etc/apache2/mods-available/ssl.conf

SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1

Verify if TLS 1.0 and 1.1 are supported:

# openssl s_client -connect mydomain.com:443 -tls1

# openssl s_client -connect mydomain.com:443 -tls1_1

Note: If you still get the certificate chain and a successful handshake, the system in question still supports that protocol version (i.e. it has not been disabled).
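To confirm that TLS 1.2 is still accepted after the change:

# openssl s_client -connect mydomain.com:443 -tls1_2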

Reference:

https://serverfault.com/questions/638691/how-can-i-verify-if-tls-1-2-is-supported-on-a-remote-web-server-from-the-rhel-ce

When checking Apache's gzip deflate compression, I realized Apache was sending "Transfer-Encoding: chunked" and not sending the "Content-Length" header

When checking Apache's gzip deflate compression, I realized Apache was sending "Transfer-Encoding: chunked" and not sending the "Content-Length" header

Andy: "Chunked output occurs when Apache doesn't know the total output size before sending, as is the case with compressed transfer (Apache compresses data into chunks when they reach a certain size, then despatches them to the browser/requester while the script is still executing). You could be seeing this because you have mod_deflate or mod_gzip active.

You can disable mod_deflate per file like so:

    SetEnvIfNoCase Request_URI get_file\.php$ no-gzip dont-vary

It's best left on in general as it greatly increases the speed of data transfer.
"

Reference:

https://serverfault.com/questions/59047/apache-sending-transfer-encoding-chunked

Friday, October 6, 2017

Manually obtaining TLS/SSL certificates from Let's Encrypt

Official build of EFF's Certbot tool for obtaining TLS/SSL certificates from Let's Encrypt.

https://hub.docker.com/r/certbot/certbot/

# certbot certonly --manual --preferred-challenges http --email me@example.com -d mydomain.com

Saturday, September 30, 2017

Profiling and optimizing Go web applications

Profiling and optimizing Go web applications

Install Graphviz for generating a PDF file:

# apt-get install graphviz

Use hey as the benchmark tool:

# go get -u github.com/rakyll/hey
# hey -n 100000 -c 10 http://localhost:8080

Sample code:

package main

import (
 "fmt"
 "log"
 "net/http"
 _ "net/http/pprof" // here be dragons
)

func main() {
 http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
  fmt.Fprintf(w, "Hello World!")
 })
 log.Fatal(http.ListenAndServe(":8080", nil))
}

If your web application is using a custom mux (HTTP request multiplexer), you will need to register a few pprof HTTP endpoints manually:

package main

import (
    "net/http"
    "net/http/pprof"
)

func hiHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("hi"))
}

func main() {
    r := http.NewServeMux()
    r.HandleFunc("/", hiHandler)

    // Register pprof handlers
    r.HandleFunc("/debug/pprof/", pprof.Index)
    r.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
    r.HandleFunc("/debug/pprof/profile", pprof.Profile)
    r.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
    r.HandleFunc("/debug/pprof/trace", pprof.Trace)

    http.ListenAndServe(":8080", r)
}

CPU profile:

http://localhost:8080/debug/pprof/profile

Memory profile:

http://localhost:8080/debug/pprof/heap

Goroutine blocking profile:

http://localhost:8080/debug/pprof/block

To look at the holders of contended mutexes, after calling runtime.SetMutexProfileFraction in your program:

http://localhost:8080/debug/pprof/mutex
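For example, early in main() (the rate of 5 is just an illustrative value; on average 1 in rate contention events is sampled):

runtime.SetMutexProfileFraction(5)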

All goroutines with stack traces:

http://localhost:8080/debug/pprof/goroutine?debug=1

Take a trace:

http://localhost:8080/debug/pprof/trace

To view all available profiles, open:

http://localhost:8080/debug/pprof/

To look at a 30-second CPU profile:

# go tool pprof http://localhost:8080/debug/pprof/profile?seconds=30
(pprof) top
(pprof) web
(pprof) exit

Note: Run go tool pprof -h 2>&1 | less for more information.

# go tool pprof -text http://localhost:8080/debug/pprof/profile?seconds=10 | tee cpu.txt

Note: output to a text file.

# go tool pprof -pdf http://localhost:8080/debug/pprof/profile?seconds=10 > cpu.pdf

Note: output to a PDF file.

# go tool pprof -tree http://localhost:8080/debug/pprof/profile?seconds=10 > cpu.txt

Note: Outputs a text rendering of call graph.

# go tool pprof -web http://localhost:8080/debug/pprof/profile?seconds=10

Note: Visualize graph through web browser.

To collect a 10-second execution trace:

# curl -o trace.out http://192.168.1.1:8080/debug/pprof/trace?seconds=10

# go tool trace -http="127.0.0.1:6060" trace.out

Note: You can run these two commands above on a client machine (e.g., Windows)

Reference:

https://golang.org/pkg/net/http/pprof/

https://blog.golang.org/2011/06/profiling-go-programs.html

http://artem.krylysov.com/blog/2017/03/13/profiling-and-optimizing-go-web-applications/

http://mmcloughlin.com/posts/your-pprof-is-showing

https://github.com/zmap

http://blog.ralch.com/tutorial/golang-performance-and-memory-analysis/

Saturday, September 16, 2017

Slice chunking in Go

Slice chunking in Go

package main

import "fmt"

var (
 logs   = []string{"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}
 numCPU = 3
)

func main() {

 var divided [][]string

 chunkSize := (len(logs) + numCPU - 1) / numCPU

 for i := 0; i < len(logs); i += chunkSize {
  end := i + chunkSize

  if end > len(logs) {
   end = len(logs)
  }

  divided = append(divided, logs[i:end])
 }

 fmt.Printf("%#v\n", divided)
}

Reference:

https://stackoverflow.com/questions/35179656/slice-chunking-in-go

Saturday, September 9, 2017

Why is a Goroutine’s stack infinite, and why does an empty loop cause high CPU spikes?

Why is a Goroutine’s stack infinite, and why does an empty loop cause high CPU spikes?

Empty loop:

for{
}

uses 100% of a CPU Core.

The proper way to wait for some operation depends on the use case; you may use:

- sync.WaitGroup (see the sketch after this list)
- select {}
- channels
- time.Sleep
- time.After
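For example, a minimal sketch of waiting with sync.WaitGroup instead of spinning in an empty loop:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            fmt.Println("worker", n, "done")
        }(i)
    }

    // Blocks until every worker has called Done; no CPU is burned while waiting.
    wg.Wait()
}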

One of the key features of Goroutines is their cost; they are cheap to create in terms of initial memory footprint (as opposed to the 1 to 8 megabytes with a traditional POSIX thread) and their stack grows and shrinks as necessary. This allows a Goroutine to start with a single 4096 byte stack which grows and shrinks as needed without the risk of ever running out.
There is however one detail I have withheld until now, which links the accidental use of a recursive function to a serious case of memory exhaustion for your operating system, and that is, when new stack pages are needed, they are allocated from the heap.
As your infinite function continues to call itself, new stack pages are allocated from the heap, permitting the function to continue to call itself over and over again. Fairly quickly the size of the heap will exceed the amount of free physical memory in your machine, at which point swapping will soon make your machine unusable.
The size of the heap available to Go programs depends on a lot of things, including the architecture of your CPU and your operating system, but it generally represents an amount of memory that exceeds the physical memory of your machine, so your machine is likely to swap heavily before your program ever exhausts its heap.

Reference:

https://stackoverflow.com/questions/39493692/difference-between-the-main-goroutine-and-spawned-goroutines-of-a-go-program

http://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite

Trouble reading packets from a socket in Go

Trouble reading packets from a socket in Go

buf needs to have a definite size. 0-length slice won't work.

Declare it as:

var buf = make([]byte, 1024)

func handleClient(conn net.Conn) {
        defer conn.Close()

        var buf [512]byte

        for {
                n, err := conn.Read(buf[0:])

                if err != nil {
                        fmt.Printf("%v\n", err.Error())
                        return
                }

                fmt.Printf("Got: %d, %s\n", n, buf)

                str := strings.Repeat("A", 16000000)
                _, err2 := conn.Write([]byte(str))

                if err2 != nil {
                        fmt.Printf("%v\n", err2.Error())
                        return
                }
        }
}

Reference:

https://stackoverflow.com/questions/2270670/trouble-reading-from-a-socket-in-go

Why can not I copy a slice with copy in golang?

Why can not I copy a slice with copy in golang?

The builtin copy(dst, src) copies min(len(dst), len(src)) elements.

Copy returns the number of elements copied, which will be the minimum of len(src) and len(dst).

So if your dst is empty (len(dst) == 0), nothing will be copied.

Try:

tmp := make([]int, len(arr))
copy(tmp, arr)

Reference:

https://stackoverflow.com/questions/30182538/why-can-not-i-copy-a-slice-with-copy-in-golang

https://golang.org/ref/spec#Appending_and_copying_slices

docker exec is not working in cron

docker exec is not working in cron

The docker exec command was being run with the -it flags (allocate a pseudo-terminal and keep STDIN open for interactive mode), while cron doesn't attach to any TTY.

Try using the -d flag instead of -it:

# /bin/sh -c "/usr/bin/docker exec -u root -d exp_mongo_1 /bin/sh /backup.sh"

Reference:

https://stackoverflow.com/questions/37089033/docker-exec-is-not-working-in-cron

Statically compiled Go programs, always, even with cgo, using musl

Statically compiled Go programs, always, even with cgo, using musl
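The gist of the approach (a sketch from memory, assuming the musl-gcc wrapper is installed; see the post for the full details):

# CC=musl-gcc go build -ldflags '-linkmode external -extldflags "-static"' .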

Reference:

https://dominik.honnef.co/posts/2015/06/go-musl/

Friday, September 8, 2017

Generic way to duplicate Go slices?

Generic way to duplicate Go slices?

One simple statement to make a shallow copy of a slice:

b := append([]T(nil), a...)

Note: The append to a nil slice translates into a make and copy.

Which is equivalent to:

b := make([]T, len(a))
copy(b, a)

Reference:

https://stackoverflow.com/questions/26433156/generic-way-to-duplicate-go-slices

cannot assign to struct field in map error

cannot assign to struct field in map error

data["p1"] isn't quite a regular addressable value: hashmaps can grow at runtime, and then their values get moved around in memory, and the old locations become outdated. If values in maps were treated as regular addressable values, those internals of the map implementation would get exposed.

So, instead, data["p1"] is a slightly different thing called a "map index expression" in the spec; if you search the spec for the phrase "index expression" you'll see you can do certain things with them, like read them, assign to them, and use them in increment/decrement expressions (for numeric types). But you can't do everything. They could have chosen to implement more special cases than they did, but I'm guessing they didn't just to keep things simple.

Issue code:

package main

import (
        "fmt"
)

type Person struct {
        Name string
}

func main() {
        data := map[string]Person{
                "p1": Person{},
        }

        data["p1"].Name = "Jun"

        fmt.Printf("%v\n", data)
}

Solution:

The solution is to make the map value a regular old pointer.

package main

import (
        "encoding/json"
        "fmt"
)

type Person struct {
        Name string
}

func main() {
        data := map[string]*Person{
                "p1": &Person{},
        }

        (*data["p1"]).Name = "Jun" // Or simply data["p1"].Name = "Jun"

        dataJSON, _ := json.MarshalIndent(data, "", "  ")
        fmt.Printf("%s\n", dataJSON)
}

Reference:

https://stackoverflow.com/questions/32751537/why-do-i-get-a-cannot-assign-error-when-setting-value-to-a-struct-as-a-value-i

Print line number for debugging in Go

Print line number for debugging in Go

Method 1:

// to change the flags on the default logger
log.SetFlags(log.LstdFlags | log.Lshortfile)

Method 2:

package main

import (
        "log"
        "runtime"
)

func MyFunc() {
        FancyHandleError()
}

func FancyHandleError() {
        // notice that we're using 1, so it will actually log where
        // the error happened, 0 = this function, we don't want that.
        pc, fn, line, _ := runtime.Caller(1)
        log.Printf("[error] in %s[%s:%d]", runtime.FuncForPC(pc).Name(), fn, line)
}

func main() {
        MyFunc()
}

Method 3:

package main

import (
        //"log"
        //"runtime"
        "runtime/debug"
)

func MyFunc() {
        FancyHandleError()
}

func FancyHandleError() {
        debug.PrintStack()
}

func main() {
        MyFunc()
}

Reference:

https://golang.org/pkg/log/#pkg-constants

https://golang.org/pkg/runtime/debug/#PrintStack

http://stackoverflow.com/questions/24809287/how-do-you-get-a-golang-program-to-print-the-line-number-of-the-error-it-just-ca

Tuesday, July 25, 2017

Drupal 8 note

Drupal 8 note

Modules

Devel
https://www.drupal.org/project/devel

Examples for Developers
https://www.drupal.org/project/examples

Drush
https://www.drupal.org/project/drush

Drupal Console
https://drupalconsole.com/
===
# svn propedit svn:ignore /www/drupal8_public/sites

sites.php
*.com
*.ca
*.cn
*.local
===
# curl -sS https://getcomposer.org/installer | php
# mv composer.phar /usr/local/bin/composer

# composer self-update
# composer update
===
Install Drupal 8

# DRUPAL_VERSION=8.2.4; export DRUPAL_VERSION
# DRUPAL_MD5=288aa9978b5027e26f20df93b6295f6c; export DRUPAL_MD5
# curl -fSL "https://ftp.drupal.org/files/projects/drupal-${DRUPAL_VERSION}.tar.gz" -o drupal.tar.gz \
&& echo "${DRUPAL_MD5} *drupal.tar.gz" | md5sum -c - \
&& tar -xz --strip-components=1 -f drupal.tar.gz \
&& rm drupal.tar.gz \
&& chown -R www-data:www-data sites modules themes

# unset DRUPAL_VERSION
# unset DRUPAL_MD5
# env
===
Drush

# wget http://files.drush.org/drush.phar
# php drush.phar core-status
# chmod +x drush.phar
# mv drush.phar /usr/local/bin/drush
# chown root:wheel /usr/local/bin/drush

# drush help

# drush pm-download drupal-8 --select --destination=/www/drupal8 --drupal-project-rename=drupal8
# cd example
# mkdir modules/contrib
# mkdir modules/custom

# drush site-install minimal --account-mail=admin@example.com --account-name=admin --account-pass=admin --site-mail=admin@example.com --site-name=MySiteName --sites-subdir=mytest.local --db-url='mysql://[db_user]:[db_pass]@localhost/[db_name]'

# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local core-status
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-releases
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-projectinfo
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-info

# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-refresh
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-updatestatus
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-updatecode
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local updatedb-status
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local updatedb

# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-download devel --destination=modules/contrib
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-projectinfo devel
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-info devel
# drush --root=/www/test_dru8/example --uri=simplestore.cent-exp.local pm-enable devel

# drush --root=/www/drupal-8.0.5 --uri=simplestore.cent-exp.local cache-rebuild

# drush --root=/www/drupal-8.0.5 --uri=simplestore.cent-exp.local php-script test.php
# drush --root=/www/drupal-8.0.5 --uri=simplestore.cent-exp.local --debug --verbose php-script test.php
===
# drupal init --override
# drupal check

# drupal list
# drupal self-update

# drupal generate:module
===
Disable Drupal 8 caching during development

# cp sites/example.settings.local.php sites/simplestore.cent-exp.local/settings.local.php

Uncomment these lines:

# vim sites/simplestore.cent-exp.local/settings.php

if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}

Uncomment these lines in settings.local.php to Disable the render cache and Disable Dynamic Page Cache:

# vim sites/simplestore.cent-exp.local/settings.local.php

$settings['cache']['bins']['render'] = 'cache.backend.null';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';

Open development.services.yml in the sites folder and add the following block (to disable twig cache):

# vim sites/development.services.yml

parameters:
  twig.config:
    debug: true
    auto_reload: true
    cache: false

Afterwards you have to rebuild the Drupal cache. Otherwise your website will encounter an unexpected error on page reload. This can be done by visiting the following URL from your Drupal 8 website:
http://yoursite/core/rebuild.php

https://www.drupal.org/node/2598914

===
Drupal Console - a CLI tool to generate boilerplate code, interact and debug Drupal 8.
https://drupalconsole.com/

Evaluate Drupal projects online
http://simplytest.me/project/examples/8.x-1.x

===
Generates a token based on $value, the user session, and the private key:
\Drupal::csrfToken()->get()
\Drupal::csrfToken()->validate()

===
1. find the proper way to deal with database query error during form_submit try .. catch (rollback)

2. db_transaction

https://api.drupal.org/api/drupal/core%21includes%21database.inc/function/db_transaction/8
http://dcycleproject.org/blog/27/dont-perform-logic-your-hookformsubmit-use-api

===
vim *.yml src/*/*.php templates/*.twig css/*.css js/*.js
===
$outArr = [];
$sql = "SELECT name FROM {variable} WHERE name LIKE :name";
$arg = [
  ':name' => db_like('csm_node_temp_') . '%',
];

foreach (db_query($sql, $arg) as $obj) {
  $outArr[] = $obj->name;
}
===
Goodbye Drush Make, Hello Composer!
https://www.lullabot.com/articles/goodbye-drush-make-hello-composer
===
files/config_*/sync

This directory contains configuration to be imported into your Drupal site. To make this configuration active, visit admin/config/development/configuration/sync. For information about deploying configuration between servers, see https://www.drupal.org/documentation/administer/config
===
Working With Twig Templates

https://www.drupal.org/node/2186401
https://www.drupal.org/node/2354645

Sunday, July 23, 2017

Execute mongo commands through shell scripts

Method 1:

# mongo DBName --quiet --eval 'db.users.find().pretty()' | less

Method 2:

# echo -e 'use DBName\ndb.users.find().pretty()' | mongo --quiet

Monday, July 10, 2017

Build OpenJDK 8 on Ubuntu 14.04

# apt-get update \
&& apt-get install build-essential mercurial zip openjdk-7-jdk libX11-dev libxext-dev libxrender-dev libxtst-dev libxt-dev libcups2-dev libfreetype6-dev libasound2-dev ccache

# cd /usr/local/src \
&& hg clone http://hg.openjdk.java.net/jdk8u/jdk8u

# cd jdk8u/ \
&& bash ./get_source.sh

# bash ./configure -with-freetype-include=/usr/include/freetype2 -with-freetype-lib=/usr/lib/x86_64-linux-gnu/

# make all

# make install

Thursday, July 6, 2017

PHP http query

<?php

// Example payload (illustrative; the original snippet assumes $data is defined elsewhere)
$data = json_encode(['key' => 'value']);

$header = "Content-type: application/json\r\n"
        . "Content-Length: " . strlen($data) . "\r\n"
;


$context = stream_context_create([
        'http' => [
                'method'  => 'POST',
                'ignore_errors' => true,
                'header'=> $header,
                'content' => $data,
        ],
        'ssl' => [
                // set some SSL/TLS specific options
                'verify_peer' => false,
                'verify_peer_name' => false,
                'allow_self_signed' => true
        ],
]);

$result = file_get_contents('http://example.com/', false, $context);

Sunday, June 18, 2017

Install and Configure Elasticsearch on Ubuntu 16.04

Install and Configure Elasticsearch on Ubuntu 16.04

Installing the Oracle JDK:

# add-apt-repository ppa:webupd8team/java
# apt-get update
# apt-get install oracle-java9-installer

Multiple Java installations can be installed on one server. You can use the following command to configure which version is the default for use:

# update-alternatives --config java

Install Elasticsearch:

# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
# apt-get install apt-transport-https
# echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

# apt-get update && apt-get install elasticsearch

# systemctl daemon-reload
# systemctl enable elasticsearch.service
# systemctl start elasticsearch.service

# vim ~/.bashrc

export JAVA_HOME="/usr/lib/jvm/java-9-oracle"

# echo $JAVA_HOME

/usr/lib/jvm/java-9-oracle

To tail the journal:

# journalctl -f

To list journal entries for the elasticsearch service:

# journalctl --unit elasticsearch

To list journal entries for the elasticsearch service starting from a given time:

# journalctl --unit elasticsearch --since "2017-06-17 15:09:11"

To test Elasticsearch is running:

# curl http://localhost:9200/

Configure Elasticsearch:

# cd /etc/elasticsearch

To check the cluster health:

# curl -X GET 'localhost:9200/_cat/health?v&pretty'

Get a list of nodes in the cluster:

# curl -X GET 'localhost:9200/_cat/nodes?v'

Create the index named "customer":

# curl -X PUT 'localhost:9200/customer?pretty'

List all indices:

# curl -X GET 'localhost:9200/_cat/indices?v'

Add a customer document into the customer index, "external" type, with an ID of 1:

# curl -X PUT 'localhost:9200/customer/external/1?pretty&pretty' -H 'Content-Type: application/json' -d'
{
"name": "John Doe"
}
'

Retrieve the document we just added:

# curl -X GET 'localhost:9200/customer/external/1?pretty&pretty'

Replace the document:

# curl -X PUT 'localhost:9200/customer/external/1?pretty&pretty' -H 'Content-Type: application/json' -d'
{
"name": "Jane Doe"
}
'

Update the document:

# curl -X POST 'localhost:9200/customer/external/1/_update?pretty&pretty' -H 'Content-Type: application/json' -d'
{
"doc": { "name": "Jane Doe", "age": 20 }
}
'

Updates can also be performed by using simple scripts:

# curl -X POST 'localhost:9200/customer/external/1/_update?pretty&pretty' -H 'Content-Type: application/json' -d'
{
"script" : "ctx._source.age += 5"
}
'

Note: ctx._source refers to the current source document that is about to be updated.

Bulk insert the multiple documents:

# curl -X POST 'localhost:9200/customer/external/_bulk?pretty&pretty' -H 'Content-Type: application/json' -d'
{"index":{"_id":"1"}}
{"name": "John Doe 3" }
{"index":{"_id":"2"}}
{"name": "Jane Doe 3" }
'

Note: the existing documents will be replaced instead of updated.

Update the first document and delete the second document in one bulk operation:

# curl -X POST 'localhost:9200/customer/external/_bulk?pretty&pretty' -H 'Content-Type: application/json' -d'
{"update":{"_id":"1"}}
{"doc": { "name": "John Doe becomes Jane Doe" } }
{"delete":{"_id":"2"}}
'

Delete a document:

# curl -X DELETE 'localhost:9200/customer/external/1?pretty&pretty'

Delete the customer index:

# curl -X DELETE 'localhost:9200/customer?pretty&pretty'

Load data into the cluster:

# curl -H "Content-Type: application/json" -XPOST 'localhost:9200/bank/account/_bulk?pretty&refresh' --data-binary "@accounts.json"
# curl 'localhost:9200/_cat/indices?v'

Note: the sample data can be generated from http://www.json-generator.com/

Query all documents in the index:

# curl -X GET 'localhost:9200/bank/_search?q=*&sort=account_number:asc&pretty&pretty'

Alternative query method:

# curl -X GET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"sort": [
{ "account_number": "asc" }
]
}
'

Returns all accounts containing the term "mill" or "lane" in the address:

# curl -X GET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "address": "mill lane" } }
}
'

Returns all accounts containing the phrase "mill lane" in the address:

# curl -X GET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_phrase": { "address": "mill lane" } }
}
'

Returns all accounts containing "mill" and "lane" in the address:

# curl -X GET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"bool": {
"must": [
{ "match": { "address": "mill" } },
{ "match": { "address": "lane" } }
]
}
}
}
'

Returns all accounts containing "mill" or "lane" in the address:

# curl -X GET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"bool": {
"should": [
{ "match": { "address": "mill" } },
{ "match": { "address": "lane" } }
]
}
}
}
'

Returns all accounts that contain neither "mill" nor "lane" in the address:

# curl -X GET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"bool": {
"must_not": [
{ "match": { "address": "mill" } },
{ "match": { "address": "lane" } }
]
}
}
}
'

Reference:

https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html

https://www.elastic.co/guide/en/elasticsearch/reference/current/_delete_an_index.html

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-16-04

Thursday, May 25, 2017

Redirect both stdout and stderr to a file:

Redirect both stdout and stderr to a file:
# ls &> filename

Note: This operator is now functional, as of Bash 4, final release.

Redirect stderr to stdout (&1), and then redirect stdout to a file:
# ls > filename 2>&1

Note: 2>&1 redirects file descriptor 2 (stderr) to file descriptor 1 (stdout).

Reference:

https://stackoverflow.com/questions/7526971/how-to-redirect-both-stdout-and-stderr-to-a-file

Wednesday, May 17, 2017

gpg: agent_genkey failed: Permission denied

$ gpg2 --gen-key

// On Ubuntu
gpg: agent_genkey failed: Permission denied
Key generation failed: Permission denied

// On CentOS
gpg: cancelled by user
gpg: Key generation canceled.

Solution:

$ ls -la $(tty)

crw--w----. 1 someone tty 136, 9 May 17 20:47 /dev/pts/9

$ sudo chown MyUserName /dev/pts/9

$ gpg2 --gen-key

Monday, May 15, 2017

sign_and_send_pubkey: signing failed: agent refused operation

sign_and_send_pubkey: signing failed: agent refused operation

Try to add the private key identities to the authentication agent:

# ssh-add

To see a list of fingerprints of all identities:

# ssh-add -l

Reference:

https://askubuntu.com/questions/762541/ubuntu-16-04-ssh-sign-and-send-pubkey-signing-failed-agent-refused-operation

Friday, May 12, 2017

Use rpmbuild to build a custom RPM package on CentOS 7

Use rpmbuild to build a custom RPM package on CentOS 7

Install the necessary tools:

# yum install rpm-build rpmdevtools rpmlint

Create the necessary directories:

# rpmdev-setuptree

// or create them manually
# mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS,tmp}

# ls -l ~/rpmbuild

Set up the temporary directory:

# vim ~/.rpmmacros

%_tmppath %(echo $HOME)/rpmbuild/tmp

Prepare some program/files for the RPM package:

# cd ~/rpmbuild
# mkdir -p ~/rpmbuild/SOURCES/test-repo-1.0.0
# echo -e '#!/bin/sh\necho "Hello World"' > ~/rpmbuild/SOURCES/test-repo-1.0.0/hello
# chmod +x ~/rpmbuild/SOURCES/test-repo-1.0.0/hello
# tar zcvf ~/rpmbuild/SOURCES/test-repo-1.0.0.tar.gz -C ~/rpmbuild/SOURCES/ test-repo-1.0.0/

Edit the spec file:

# vim ~/rpmbuild/SPECS/test-repo.spec

%define NAME test-repo
%define VERSION 1.0.0
%define INSTALL_DIR /usr/local
%define OWNER root

# Disable creating debuginfo RPM
%define debug_package %{nil}

# Strip debug symbols (possibly making the program not function properly)
#%define __strip /bin/true

Name: %NAME
Version: %VERSION

# Refer to:
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# https://fedoraproject.org/wiki/Packaging:DistTag?rd=Packaging/DistTag
Release: 1%{?dist}

Summary: Package foo summary
Source: %NAME-%VERSION.tar.gz
License: MIT

# Refer to:
# cat /usr/share/doc/rpm-4.11.3/GROUPS
Group: Development/Tools

%description
Package foo description.

%prep

# Start uncompressing
%setup -q

%build

%install

# Turn this on to find out the available environment variables.
#env

install -m 0755 -d ${RPM_BUILD_ROOT}%INSTALL_DIR/%NAME
#mkdir -p ${RPM_BUILD_ROOT}%INSTALL_DIR/%NAME
cp -r * ${RPM_BUILD_ROOT}%INSTALL_DIR/%NAME/

#make install DESTDIR=$RPM_BUILD_ROOT

%clean
rm -rf ${RPM_BUILD_DIR}/*
rm -rf ${RPM_BUILD_ROOT}
rm -rf %_tmppath/*

%post
echo . .
echo "Write some description here to show after package installation!"

%files
%INSTALL_DIR
%defattr(-,%OWNER,%OWNER)

%changelog

Validate the spec:

# cd ~/rpmbuild
# rpmlint SPECS/test-repo.spec

SPECS/test-repo.spec:10: W: macro-in-comment %define
SPECS/test-repo.spec:42: W: macro-in-comment %INSTALL_DIR
SPECS/test-repo.spec:42: W: macro-in-comment %NAME
SPECS/test-repo.spec: W: invalid-url Source0: test-repo-1.0.0.tar.gz
0 packages and 1 specfiles checked; 0 errors, 4 warnings.

Note: Do not worry if you see the warnings.

Build RPM without the source:

# cd ~/rpmbuild
# rpmbuild -v -bb SPECS/test-repo.spec

Build RPM with the source:

# rpmbuild -v -ba SPECS/test-repo.spec

Install the RPM:

# rpm -ivh RPMS/x86_64/test-repo-1.0.0-1.el7.centos.x86_64.rpm

Check the RPM installation:

# ls -la /usr/local/test-repo/

List the installed RPM:

# rpm -qa | grep test-repo

test-repo-1.0.0-1.el7.centos.x86_64

List the files in the installed RPM:

# rpm -q --filesbypkg test-repo

test-repo                 /usr/local
test-repo                 /usr/local/test-repo
test-repo                 /usr/local/test-repo/hello

To find out which RPM a file belongs to:

# rpm -qf /usr/local/test-repo/hello

test-repo-1.0.0-1.el7.centos.x86_64

Remove the RPM:

# rpm -e test-repo

Reference:

https://fedoraproject.org/wiki/How_to_create_an_RPM_package

Wednesday, May 10, 2017

gpg --gen-key hangs at gaining enough entropy on CentOS 7

gpg --gen-key hangs at gaining enough entropy on CentOS 7

Solution 1:

Install random number generator:

# yum install rng-tools

# systemctl enable rngd

# systemctl restart rngd

Solution 2:

# dd if=/dev/sda of=/dev/zero

Reference:

https://serverfault.com/questions/471412/gpg-gen-key-hangs-at-gaining-enough-entropy-on-centos-6#

Sunday, April 30, 2017

Changing Ubuntu full Screen Resolution in a Hyper-V VM

Changing Ubuntu full Screen Resolution in a Hyper-V VM

# vi /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1920x1080"

# update-grub
# sync; reboot

Reference:

https://blogs.msdn.microsoft.com/virtual_pc_guy/2014/09/19/changing-ubuntu-screen-resolution-in-a-hyper-v-vm/

Friday, April 14, 2017

Save the selected lines to a file

To save the selected lines to a file, select a region using visual mode and then enter:

:w /tmp/filename

To insert/paste text from a file:

:r /tmp/filename

Set up Xdebug

Set up Xdebug

https://github.com/joonty/vdebug

SELinux does not allow httpd to connect to other network resources by default. Turn it on:

# setsebool -P httpd_can_network_connect 1
# getsebool -a | grep httpd_can
or
# chcon -v -t httpd_sys_content_t /usr/lib64/php/modules/xdebug.so

Note: http://stackoverflow.com/questions/2207489/apache-not-loading-xdebug-but-does-when-started-from-the-command-line

Edit ~/.vimrc:

# vim ~/.vimrc

let g:vdebug_options = {}
let g:vdebug_options["port"] = 9009

Edit xdebug configuration:

# vim /etc/php.d/15-xdebug.ini

; Enable xdebug extension module
zend_extension=xdebug.so
;zend_extension=/usr/lib64/php/modules/xdebug.so

xdebug.default_enable=1
xdebug.remote_enable=1
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_port=9009
xdebug.remote_log=/tmp/xdebug.log
xdebug.remote_connect_back=0
xdebug.remote_autostart=0
xdebug.remote_mode=req

xdebug.max_nesting_level=1000

Sample script:

# vim test.php

<?php
here();

function here() {
    xdebug_break();
    echo name('Jun');
}

function name($name) {
    $name .= '1';
    return 'Hello ' . $name . ' ' . date('Y-m-d H:i:s');
}

Once in debugging mode, the following default mappings are available:

<F5>: start/run (to next breakpoint/end of script)
<F2>: step over
<F3>: step into
<F4>: step out
<F6>: stop debugging (kills script)
<F7>: detach script from debugger
<F9>: run to cursor
<F10>: toggle line breakpoint
<F11>: show context variables (e.g. after "eval")
<F12>: evaluate variable under cursor
:Breakpoint <type> <args>: set a breakpoint of any type (see :help VdebugBreakpoints)
:VdebugEval <code>: evaluate some code and display the result
<Leader>e: evaluate the expression under visual highlight and display the result

To stop debugging, press <F6>. Press it again to close the debugger interface.

If you can't get a connection, then chances are you need to spend a bit of time setting up your environment. Type :help Vdebug for more information.

Browser:

http://dru8.local/hello?XDEBUG_SESSION_START=1

Note: Append XDEBUG_SESSION_START=1 to the end of the URL.

Tuesday, April 11, 2017

Compile the latest Vim 8.0 on CentOS 7

Compile the latest Vim 8.0 on CentOS 7

Remove the existing vim if you have already installed it:

# yum list installed | grep -i vim

vim-common.x86_64                    2:7.4.160-1.el7                @base
vim-enhanced.x86_64                  2:7.4.160-1.el7                @base
vim-filesystem.x86_64                2:7.4.160-1.el7                @base
vim-minimal.x86_64                   2:7.4.160-1.el7                @anaconda

# yum remove vim-enhanced vim-common vim-filesystem

Note: You do not need to remove vim-minimal because sudo depends on it.

For Red Hat/CentOS users:

# yum install gcc make ncurses ncurses-devel
# yum install git
# yum install ruby ruby-devel lua lua-devel luajit \
luajit-devel ctags python python-devel \
python3 python3-devel tcl-devel \
perl perl-devel perl-ExtUtils-ParseXS \
perl-ExtUtils-XSpp perl-ExtUtils-CBuilder \
perl-ExtUtils-Embed

or

# yum clean all
# yum grouplist
# yum groupinfo "Development Tools"
# yum groupinstall "Development tools"
# yum install ncurses ncurses-devel

For debian/Ubuntu users:

# apt-get remove vim vim-runtime vim-tiny vim-common

# apt-get install libncurses5-dev libgnome2-dev libgnomeui-dev \
libgtk2.0-dev libatk1.0-dev libbonoboui2-dev \
libcairo2-dev libx11-dev libxpm-dev libxt-dev python-dev \
python3-dev ruby-dev git

# apt-get install libncurses5-dev python-dev libperl-dev ruby-dev liblua5.2-dev

// Fix liblua paths
# ln -s /usr/include/lua5.2 /usr/include/lua
# ln -s /usr/lib/x86_64-linux-gnu/liblua5.2.so /usr/local/lib/liblua.so

If you want to install the optional packages With yum groupinstall command:

# vim /etc/yum.conf

group_package_types=default, mandatory, optional

Note: the default setting is default, mandatory.

Install ctags and cscope:

# yum install ctags cscope

Build vim:

# cd /usr/local/src

Download vim source (it is better to get it from GitHub because you can get all the latest patches from there):

# git clone https://github.com/vim/vim.git
# cd vim
or
# wget ftp://ftp.vim.org/pub/vim/unix/vim-7.4.tar.bz2
# tar -xjf vim-7.4.tar.bz2
# cd vim74

Show the configuration options:

# ./configure --help

Configure:

# ./configure --prefix=/usr --with-features=huge --enable-multibyte --enable-rubyinterp --enable-pythoninterp --enable-perlinterp --enable-luainterp --enable-cscope

Build:

# make
or
# make VIMRUNTIMEDIR=/usr/share/vim/vim74

Note: VIMRUNTIMEDIR should match the runtime directory of the version you are building (vim74 for 7.4, vim80 for 8.0).

# make install

Re-hash the shell's command cache:

# hash -r

Check vim version:

# vim --version | less

VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Jul 24 2016 14:27:26)
Included patches: 1-2102
...
+lua +multi_byte +perl +python +ruby

Check vim patches:

# vim

:echo has("patch-7.4-2102")

1

Reference:

https://github.com/Valloric/YouCompleteMe/wiki/Building-Vim-from-source

https://gist.github.com/holguinj/11064609

http://www.fullybaked.co.uk/articles/installing-latest-vim-on-centos-from-source

http://www.vim.org/git.php

Wednesday, April 5, 2017

PHPUnit did not output the result in color

PHPUnit did not output results in color even though colors="true" was set in phpunit.xml. The cause was a missing posix extension, which is provided by the php-process package.

# yum install php71u-process
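
To verify that the posix extension is actually available after installing the package, a quick check from PHP (a minimal sketch):

<?php
// Should print bool(true) once the posix extension is loaded;
// PHPUnit checks posix_isatty() when deciding whether to colorize output.
var_dump(extension_loaded('posix'));
var_dump(function_exists('posix_isatty'));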

Tuesday, April 4, 2017

ReflectionException: Class PHPUnit_Framework_Error does not exist

ReflectionException: Class PHPUnit_Framework_Error does not exist

    /**
     * @expectedException PHPUnit_Framework_Error
     */
    public function testFailingInclude()
    {
        include 'not_existing_file.php';
    }

Solution 1:

/**
 * @expectedException \PHPUnit\Framework\Error\Warning
 */
public function testFailingInclude()
{
    include 'not_existing_file.php';
}

Solution 2:

public function testFailingInclude()
{
    $this->expectException(\PHPUnit\Framework\Error\Error::class);

    include 'not_existing_file.php';
}

Thursday, March 16, 2017

Redis: (error) CROSSSLOT Keys in request don't hash to the same slot

Redis: (error) CROSSSLOT Keys in request don't hash to the same slot

# redis-cli -c -p 30001

127.0.0.1:30001> SADD mycolor1 R G B
-> Redirected to slot [12383] located at 127.0.0.1:30003
(integer) 3
127.0.0.1:30003> SADD mycolor2 G B Y
-> Redirected to slot [60] located at 127.0.0.1:30001
(integer) 3
127.0.0.1:30001> SUNION mycolor1 mycolor2
(error) CROSSSLOT Keys in request don't hash to the same slot

In a cluster topology, the keyspace is divided into hash slots. Different nodes will hold a subset of hash slots.

Multi-key operations, transactions, and Lua scripts involving multiple keys are allowed only if all the keys involved are in hash slots belonging to the same node.

Redis Cluster implements all the single-key commands available in the non-distributed version of Redis. Commands performing complex multi-key operations, like Set type unions or intersections, are implemented as well, as long as the keys all belong to the same node.

You can force the keys to belong to the same node by using Hash Tags:

> SADD {colorlib}.mycolor1 R G B
> SADD {colorlib}.mycolor2 G B Y
> SUNION {colorlib}.mycolor1 {colorlib}.mycolor2
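
The same hash-tag approach works from PHP. A minimal sketch using the phpredis RedisCluster class (this assumes the phpredis extension with cluster support is installed; the hosts and key names simply mirror the example above):

<?php
// Connect to the cluster through one or more seed nodes (any node will do).
$cluster = new RedisCluster(null, ['127.0.0.1:30001', '127.0.0.1:30002', '127.0.0.1:30003']);

// The {colorlib} hash tag forces both keys into the same hash slot,
// so the multi-key SUNION below is allowed.
$cluster->sAdd('{colorlib}.mycolor1', 'R', 'G', 'B');
$cluster->sAdd('{colorlib}.mycolor2', 'G', 'B', 'Y');

print_r($cluster->sUnion('{colorlib}.mycolor1', '{colorlib}.mycolor2'));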

Reference:

http://stackoverflow.com/questions/38042629/redis-cross-slot-error

http://redis.io/topics/cluster-spec#keys-hash-tag

Assign multiple IP addresses to one single network interface

Assign multiple IP addresses to one single network interface

Method 1:

# cd /etc/sysconfig/network-scripts/
# vim ifcfg-eno16777736

IPADDR0="192.168.6.60"
PREFIX0="24"

IPADDR1="192.168.6.61"
PREFIX1="24"

IPADDR2="10.0.0.2"
PREFIX2="24"
GATEWAY2="10.0.0.1"

Note: I am using a different subnet 10.0.0.0/24 here, too.

Note: If you run the ifconfig command, these IP aliases will not show up, because ifconfig is essentially deprecated; the replacement is the ip command.

# systemctl restart network.service
# ip addr

2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:40:a1:f2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.10/24 brd 192.168.6.255 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet 192.168.6.60/24 brd 192.168.6.255 scope global secondary eno16777736
       valid_lft forever preferred_lft forever
    inet 192.168.6.61/24 brd 192.168.6.255 scope global secondary eno16777736
       valid_lft forever preferred_lft forever

Method 2:

# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eno16777736 ifcfg-eno16777736:0
# cp ifcfg-eno16777736 ifcfg-eno16777736:1

# vim ifcfg-eno16777736:0

DEVICE="eno16777736:0"
IPADDR="192.168.6.60"

# vim ifcfg-eno16777736:1

DEVICE="eno16777736:1"
IPADDR="192.168.6.61"

# systemctl restart network.service

Method 3:

If you would like to assign a range of IP addresses to a particular network interface:

# cd /etc/sysconfig/network-scripts/
# vim ifcfg-eno16777736

NM_CONTROLLED=NO

Note: This setting is required on Red Hat/CentOS 7.x to enable the range files; it takes the interface out of NetworkManager's control so the range files can be used.

# touch ifcfg-eno16777736-range0
# vim ifcfg-eno16777736-range0

IPADDR_START="192.168.6.63"
IPADDR_END="192.168.6.68"
PREFIX="24"
CLONENUM_START="0"

Note: CLONENUM_START is the number of the virtual device (alias) that the first IP address will be assigned to. If you have more than one range file, make sure this number is set to the next available alias number.

# systemctl restart network.service

Reference:

http://www.tecmint.com/create-multiple-ip-addresses-to-one-single-network-interface/

https://www.ubiquityhosting.com/blog/configure-ip-ranges-on-centos-7-redhat-7/

http://askubuntu.com/questions/227457/ifconfig-not-showing-all-ips-bound-to-the-machine

Wednesday, March 15, 2017

What's the difference between Unix socket and TCP/IP socket when setting up PHP-FPM?

What's the difference between Unix socket and TCP/IP socket when setting up PHP-FPM?

A UNIX socket is an inter-process communication mechanism that allows bidirectional data exchange between processes running on the same machine.

IP sockets (especially TCP/IP sockets) are a mechanism allowing communication between processes over the network. In some cases, you can use TCP/IP sockets to talk with processes running on the same computer (by using the loopback interface).

UNIX domain sockets know that they’re executing on the same system, so they can avoid some checks and operations (like routing); which makes them faster and lighter than IP sockets. So if you plan to communicate with processes on the same host, this is a better option than IP sockets.

Edit: As per Nils Toedtmann's comment: UNIX domain sockets are subject to file system permissions, while TCP sockets can be controlled only on the packet filter level.
===
When you are using TCP, you are also using the whole network stack. Even if you are on the same machine, this implies that packets are encapsulated and decapsulated to use the network stack and the related protocols.

If you use unix domain sockets, you will not be forced to go through all the network protocols that are required otherwise. The sockets are identified solely by the inodes on your hard drive.
===
I believe that UNIX domain sockets in theory give better throughput than TCP sockets on the loopback interface, but in practice the difference is probably negligible.

Data carried over UNIX domain sockets don't have to go up and down through the IP stack layers.

re: Alexander's answer. AFAIK you shouldn't get more than one context switch or data copy in each direction (i.e. for each read() or write()), which is why I believe the difference will be negligible. The IP stack doesn't need to copy the packet as it moves between layers, but it does have to manipulate internal data structures to add and remove higher-layer packet headers.

A Unix domain socket (UDS) works like a system pipe: it sends only data, with no checksums or other protocol overhead, and it does not use the TCP three-way handshake.
===
Unix sockets can have owners (users and groups); TCP sockets cannot. Therefore, Unix sockets are more secure, but you cannot separate your web server (e.g. Nginx) from your PHP application server (e.g. PHP5-FPM) across the network.
===
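
To make the addressing difference concrete from PHP's side, here is a minimal sketch using stream_socket_client(); the socket path and port are assumptions and must match the listen setting in your PHP-FPM pool:

<?php
// Unix domain socket: addressed by a filesystem path, no IP stack involved.
$unix = @stream_socket_client('unix:///run/php-fpm/www.sock', $errno, $errstr, 1);
var_dump($unix !== false);

// TCP socket on the loopback interface: same machine, but the full network stack is used.
$tcp = @stream_socket_client('tcp://127.0.0.1:9000', $errno, $errstr, 1);
var_dump($tcp !== false);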

Reference:

http://serverfault.com/questions/124517/whats-the-difference-between-unix-socket-and-tcp-ip-socket

http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets/257479#257479

http://unix.stackexchange.com/questions/91774/performance-of-unix-sockets-vs-tcp-ports

https://www.reddit.com/r/webhosting/comments/2mgyzg/unix_domain_socket_vs_tcp_can_someone_explain_and/

Wednesday, March 1, 2017

PHPStorm + XDebug Setup Walkthrough

# vim /etc/php/7.0/fpm/conf.d/20-xdebug.ini

zend_extension=xdebug.so

xdebug.remote_enable=1
xdebug.remote_port=9000
xdebug.profiler_enable=1
xdebug.profiler_output_dir="/tmp/xdebug"
xdebug.idekey="PHPSTORM"
xdebug.remote_autostart=1
xdebug.remote_host=localhost
xdebug.remote_mode=req
xdebug.remote_connect_back=1
xdebug.max_nesting_level=200
xdebug.var_display_max_depth=1000
xdebug.var_display_max_children=256
xdebug.var_display_max_data=4096

3. Set up the IDE settings

Preferences > Languages & Frameworks > PHP

3.1. Set the language level to the correct PHP version for this project

3.2. Set an interpreter (point it at the parent of the bin directory that holds the PHP executable)

3.2.1. Click the … button > click the + button > Other Local > set the PHP executable path

To find the path, type in the terminal: $ which php

Example: /usr/local/Cellar/php56/5.6.5/bin/php

4. Restart PhpStorm

5. Now let's make it work

5.1. Run > Edit Configurations > click the green + button on the left > select PHP Web Application

5.2. Name: anything, e.g. "{application name} - debugger"

5.3. Server: localhost (browse > + > Name: anything | Host: localhost or 127.0.0.1)

5.4. Click OK

5.5. Start URL: the URL of your project's homepage, e.g. http://127.0.0.1:80/SomethingNew/

5.6. Click OK

Reference:

http://stackoverflow.com/questions/9183179/phpstorm-xdebug-setup-walkthrough

Sunday, February 26, 2017

How to do a regular expression replace string in MySQL?

How to do a regular expression replace string in MySQL?

Install a MySQL user defined function called lib_mysqludf_preg from:

http://www.mysqludf.org/

https://github.com/mysqludf/lib_mysqludf_preg

For RedHat / CentOS:

# yum -y install pcre-devel gcc make automake mysql-devel

For Debian / Ubuntu:

# apt-get update
# apt-get install build-essential libmysqld-dev libpcre3-dev

Install lib_mysqludf_preg:

# git clone https://github.com/mysqludf/lib_mysqludf_preg.git
# cd lib_mysqludf_preg
# ./configure
# make
# make install
# make installdb
# make test

Example:

SELECT CONVERT( PREG_REPLACE( '/fox/i' , 'dog' , 'The brown fox' ) USING UTF8) as replaced;
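
Once the UDF is installed, it can be called from PHP like any other MySQL function. A minimal sketch using PDO (the DSN, credentials, and the articles/body table and column names are placeholders, not from the original post):

<?php
// Placeholder connection details - adjust to your environment.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'password');

// Replace "fox" (case-insensitive) with "dog" in a hypothetical articles.body column.
$stmt = $pdo->query(
    "SELECT CONVERT(PREG_REPLACE('/fox/i', 'dog', body) USING UTF8) AS replaced FROM articles"
);

foreach ($stmt as $row) {
    echo $row['replaced'] . PHP_EOL;
}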

Saturday, February 25, 2017

To reset the Magento 2 admin password:

Method 1 - reset admin password directly in database:

mysql> SET @salt = MD5(UNIX_TIMESTAMP());
mysql> UPDATE admin_user SET password = CONCAT(SHA2(CONCAT(@salt, 'MyNewPassword'), 256), ':', @salt, ':1') WHERE username = 'admin';

Method 2 - using PHP to generate the password. Then, reset it in database:

# php -r '$salt = md5(time()); echo hash("sha256", $salt . $argv[1]).":$salt:1\n";' MyNewPassword

66bdd4e5008cad465a6cd23eb6ac3aa6ef4c65d07a179157bab11935f9f4d62f:a4506164831ba6f12474a3ffe57602d0:1

mysql> UPDATE admin_user SET password = '<code above>' WHERE username='admin';

Method 3 - Generating the password. Then, reset it in database:

Add the following line at the end of pub/index.php and look at the footer of any page for the generated hash.

# vim pub/index.php

echo \Magento\Framework\App\ObjectManager::getInstance()->get("\Magento\Framework\Encryption\Encryptor")->getHash("MyNewPassword");

Method 4 - Create a new admin user. Then, reset the previous admin password:

# php bin/magento admin:user:create --admin-user=admin2 --admin-password=MyNewPassword2 --admin-email=admin@example.com --admin-firstname=admin --admin-lastname=admin

Reference:

http://magento.stackexchange.com/questions/90922/how-to-reset-lost-admin-password-in-magento-2/161792#161792

Wednesday, February 22, 2017

To dump an Access database to a MySQL SQL file

To dump an Access database to a MySQL SQL file

1. Use Navicat

https://www.navicat.com/

2. Use Access To MySQL by bullzip

http://www.bullzip.com/products/a2m/info.php

Wednesday, February 15, 2017

View Google Chrome downloaded cached images and videos

View Google Chrome downloaded cached images and videos

Type: chrome://cache/

Print SQL query for the collection for debugging in Magento 2

Print SQL query for the collection for debugging in Magento 2

echo $productCollection->getSelect()->assemble();
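
An alternative sketch, assuming the select object implements __toString() (the Magento 2 DB Select class does, but verify for your version):

echo (string) $productCollection->getSelect();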

Change Session timeout in Magento 2

Change Session timeout in Magento 2

Stores > Configuration > Advanced > Admin > Security > Admin Session Lifetime (Seconds)

# vim php.ini

session.gc_maxlifetime = 36000

Tuesday, February 14, 2017

Get product tier pricing programmatically in Magento 2

Get product tier pricing programmatically in Magento 2

<?php
use \Magento\Framework\App\Bootstrap;

#require __DIR__ . '/../app/bootstrap.php';
require '/www/mag2.local/app/bootstrap.php';

$bootstrap = Bootstrap::create(BP, $_SERVER);
$objectManager = $bootstrap->getObjectManager();

### Setting area code
### NOTE: for more info http://devdocs.magento.com/guides/v2.1/architecture/archi_perspectives/components/modules/mod_and_areas.html
$state = $objectManager->get('\Magento\Framework\App\State');
#$state->setAreaCode('base');

$productId = 1;
$objectManager = \Magento\Framework\App\ObjectManager::getInstance();
$product_obj = $objectManager->create('\Magento\Catalog\Model\Product')->load($productId);

getDefaultGroup($product_obj);
getAnyGroup($product_obj);

function getDefaultGroup($product_obj) {
 $tier_price = $product_obj->getTierPrice();

 if(count($tier_price) > 0){
  echo "price_id\tprice_qty\tprice\twebsite_price\n";

  foreach($tier_price as $price){
   echo $price['price_id'];
   echo "\t";
   echo $price['price_qty'];
   echo "\t";
   echo $price['price'];
   echo "\t";
   echo $price['website_price'];
   echo "\n";
  }
 } else {
  echo 'There is no tiering price for the default group.' . PHP_EOL;
 }
}

function getAnyGroup($product_obj) {
 $tier_price = $product_obj->getTierPrices();

 if(count($tier_price) > 0){
  echo "price_qty\tprice\tCustomerGroupId\n";

  foreach($tier_price as $price){
   echo $price->getQty();
   echo "\t";
   echo $price->getValue();
   echo "\t";
   echo $price->getCustomerGroupId();
   echo "\t";
   echo "\n";
   print_r($price->getData());
   echo "\t";
   echo "\n";
  }
 }
}

#print_r($tier_price);
#print_r(get_class_methods($price));

Get customer information programmatically in Magento 2

Get customer information programmatically in Magento 2

<?php
use \Magento\Framework\App\Bootstrap;

#require __DIR__ . '/../app/bootstrap.php';
require '/www/mag2.local/app/bootstrap.php';

$bootstrap = Bootstrap::create(BP, $_SERVER);

$objectManager = $bootstrap->getObjectManager();

### Setting area code
### NOTE: for more info http://devdocs.magento.com/guides/v2.1/architecture/archi_perspectives/components/modules/mod_and_areas.html
$state = $objectManager->get('\Magento\Framework\App\State');
#$state->setAreaCode('base');

$storeManager = $objectManager->get('\Magento\Store\Model\StoreManagerInterface');
$storeId = $storeManager->getStore()->getId();

#$websiteId = $storeManager->getWebsite()->getWebsiteId();
$websiteId = $storeManager->getStore($storeId)->getWebsiteId();

// Customer Factory to Create Customer
$customerFactory = $objectManager->get('\Magento\Customer\Model\CustomerFactory');
$customer = $customerFactory->create();
$customer->setWebsiteId($websiteId);
$customer->loadByEmail('test@example.com');  

$data = $customer->getData(); 
print_r($data);

$defaultBilling = $customer->getDefaultBillingAddress();
$defaultShipping = $customer->getDefaultShippingAddress();

foreach ($customer->getAddresses() as $address) {
 echo 'IsDefaultBillingAddress: ' . ($defaultBilling && $defaultBilling->getId() == $address->getId() ? 'Yes' : 'No') . PHP_EOL;
 echo 'IsDefaultShippingAddress: ' . ($defaultShipping && $defaultShipping->getId() == $address->getId() ? 'Yes' : 'No') . PHP_EOL;

 echo 'ID: ' . $address->getId() . PHP_EOL;
 echo 'First Name: ' . $address->getFirstname() . PHP_EOL;
 echo 'Last Name: ' . $address->getLastname() . PHP_EOL;
 echo 'Street: ' . implode("\n", $address->getStreet()) . PHP_EOL;
 echo 'City: ' . $address->getCity() . PHP_EOL;
 echo 'Country: ' . $address->getCountry() . PHP_EOL;
 echo 'Region: ' . $address->getRegion() . PHP_EOL;
 echo 'Postal Code: ' . $address->getPostcode() . PHP_EOL;
 echo 'Phone: ' . $address->getTelephone() . PHP_EOL;
 echo PHP_EOL;

 #print_r(get_class_methods($address));
 #break;
}

Get product information programmatically in Magento 2

Get product information programmatically in Magento 2

<?php
use \Magento\Framework\App\Bootstrap;

#require __DIR__ . '/../app/bootstrap.php';
require '/www/mag2.local/app/bootstrap.php';

$bootstrap = Bootstrap::create(BP, $_SERVER);
$objectManager = $bootstrap->getObjectManager();

### Setting area code
### NOTE: for more info http://devdocs.magento.com/guides/v2.1/architecture/archi_perspectives/components/modules/mod_and_areas.html
$state = $objectManager->get('\Magento\Framework\App\State');
#$state->setAreaCode('base');

$storeManager = $objectManager->get('\Magento\Store\Model\StoreManagerInterface');
$storeId = $storeManager->getStore()->getId();

$productCollectionFactory = $objectManager->get('\Magento\Catalog\Model\ResourceModel\Product\CollectionFactory');
$productCollection = $productCollectionFactory->create();
$productCollection->addAttributeToSelect('*');

foreach ($productCollection as $product) {
 echo 'Id: ' . $product->getId() . PHP_EOL;
 echo 'Sku: ' . $product->getSku() . PHP_EOL;
 echo 'Price: ' . $product->getPrice() . PHP_EOL;
 echo 'Weight: ' . $product->getWeight() . PHP_EOL;
 print_r($product->getData());
 echo PHP_EOL;
}

Get store information programmatically in Magento 2

Get store information programmatically in Magento 2

<?php
use \Magento\Framework\App\Bootstrap;

#require __DIR__ . '/../app/bootstrap.php';
require '/www/mag2.local/app/bootstrap.php';

$bootstrap = Bootstrap::create(BP, $_SERVER);
$objectManager = $bootstrap->getObjectManager();

### Setting area code
### NOTE: for more info http://devdocs.magento.com/guides/v2.1/architecture/archi_perspectives/components/modules/mod_and_areas.html
$state = $objectManager->get('\Magento\Framework\App\State');
#$state->setAreaCode('base');

$storeManager = $objectManager->get('\Magento\Store\Model\StoreManagerInterface');
$storeId = $storeManager->getStore()->getId();

$baseURL = $storeManager->getStore($storeId)->getBaseUrl();
$mediaBaseURL = $storeManager->getStore($storeId)->getBaseUrl(\Magento\Framework\UrlInterface::URL_TYPE_MEDIA);
$linkBaseURL = $storeManager->getStore($storeId)->getBaseUrl(\Magento\Framework\UrlInterface::URL_TYPE_LINK);

$websiteId = $storeManager->getStore($storeId)->getWebsiteId();
$storeCode = $storeManager->getStore($storeId)->getCode();
$storeName = $storeManager->getStore($storeId)->getName();
$currentUrl = $storeManager->getStore($storeId)->getCurrentUrl();
$isActive = $storeManager->getStore($storeId)->isActive();
$isFrontUrlSecure = $storeManager->getStore($storeId)->isFrontUrlSecure();
$isCurrentlySecure = $storeManager->getStore($storeId)->isCurrentlySecure();

echo 'baseURL: ' . $baseURL . PHP_EOL;
echo 'mediaBaseURL: ' . $mediaBaseURL . PHP_EOL;
echo 'linkBaseURL: ' . $linkBaseURL . PHP_EOL;
echo 'websiteId: ' . $websiteId . PHP_EOL;
echo 'storeCode: ' . $storeCode . PHP_EOL;
echo 'storeName: ' . $storeName . PHP_EOL;
echo 'currentUrl: ' . $currentUrl . PHP_EOL;
echo 'isActive: ' . $isActive . PHP_EOL;
echo 'isFrontUrlSecure: ' . var_export($isFrontUrlSecure, true) . PHP_EOL;
echo 'isCurrentlySecure: ' . var_export($isCurrentlySecure, true) . PHP_EOL;