tag:blogger.com,1999:blog-1812624825100429212024-02-19T20:52:39.633+07:00It's a note not a blogDaily technical notes of Rio AstamalUnknownnoreply@blogger.comBlogger88125tag:blogger.com,1999:blog-181262482510042921.post-82221480373556592952020-11-24T08:10:00.006+07:002020-11-24T08:25:20.767+07:00Disable Homebrew Auto Update<p>Homebrew runs a self-update whenever we install a package. Sometimes this is not what we want; we are fine using an older version of Homebrew and some older packages.</p>
<p>To prevent Homebrew from updating automatically, just set an environment variable named <code>HOMEBREW_NO_AUTO_UPDATE</code>.</p>
<pre><code>$ HOMEBREW_NO_AUTO_UPDATE=1 brew install [PACKAGE]
</code></pre>
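To avoid typing the variable on every invocation, it can be exported once per shell session (a minimal sketch; adding the same line to your shell profile, e.g. <code>~/.bashrc</code>, is an assumed way to make it permanent):

```shell
# Every brew command in this session will now skip the self-update.
export HOMEBREW_NO_AUTO_UPDATE=1
echo "$HOMEBREW_NO_AUTO_UPDATE"   # prints: 1
```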
<h4>References</h4>
<ul><li><a href="https://github.com/Homebrew/brew/issues/1670">https://github.com/Homebrew/brew/issues/1670</a></li></ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-9676687038541615132020-09-04T20:41:00.002+07:002020-11-24T08:29:47.014+07:00Terraform: Define AWS Security Group for All ICMP Traffic <h3>Define AWS Security Group for All ICMP Traffic in Terraform</h3>
The Terraform documentation is not very clear about which "from" and "to" port numbers need to be defined to allow all ICMP traffic. If you are having a hard time figuring it out, here is how to allow all ICMP traffic in a Security Group.
<pre>
<code>...
ingress {
  ...
  from_port = -1
  to_port   = -1
  protocol  = "icmp"
  ...
}
...
</code></pre>
That's it. The special number "-1" did the trick.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-50809490040745335062020-07-13T12:52:00.002+07:002020-11-24T08:31:26.307+07:00How to Kill Background Child Process in Bash<h3>Terminate Background Child Process in Bash</h3>
<p>In Bash, a child process that was sent to the background is still alive when the main program is terminated. Take a look at the example below.</p>
<pre><code>#!/bin/bash
# Run a Python web server
python3 -m http.server --directory src 8000 &
# Run SASS watcher and compiler...
sass --watch src/scss:src/css &
wait</code></pre>
When we run the script above and terminate it using CTRL+C, Python and SASS are still running.
<h3>Solution to terminate Background Child Process in Bash</h3>
<p>The solution to the problem above is the shell built-in <code>trap</code> command.</p>
<pre><code>#!/bin/bash
# Kill all child process (Python and SASS) when exit
trap "kill 0" EXIT
# Run a Python web server
python3 -m http.server --directory src 8000 &
# Run SASS watcher and compiler...
sass --watch src/scss:src/css &
wait</code></pre>
<p>Now when the script exits, all child processes, even those sent to the background, are terminated as well.</p>
<h4>References</h4>
<ul><li><a href="https://spin.atomicobject.com/2017/08/24/start-stop-bash-background-process/">https://spin.atomicobject.com/2017/08/24/start-stop-bash-background-process/</a></li></ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-73985558983538599382020-05-28T12:32:00.003+07:002020-11-24T08:32:35.562+07:00Terraform S3 Having Problem with Leading Slash<h3>Problem with S3 Leading Slash in Terraform</h3>
<p>S3 accepts a leading slash "/" in object keys and automatically strips it off. When we use such a key in Terraform it may look fine, until we reference it from another resource. See the example below.</p>
<pre><code># This bucket is used to store Lambda function and layer
resource "aws_s3_bucket" "deno" {
bucket = var.default_bucket
acl = "private"
tags = var.default_tags
}
# Upload the layer to S3
resource "aws_s3_bucket_object" "deno_layer" {
bucket = aws_s3_bucket.deno.id
tags = var.default_tags
# The leading / in "deno-custom-runtime/function.zip" below causes the problem
key = "/deno-custom-runtime/function.zip"
source = "${path.module}/../build/function.zip"
etag = filemd5("${path.module}/../build/function.zip")
}
# Deno Layer
resource "aws_lambda_layer_version" "deno" {
layer_name = "TeknocerdasDenoRuntime"
s3_bucket = aws_s3_bucket.deno.id
s3_key = aws_s3_bucket_object.deno_layer.key
s3_object_version = aws_s3_bucket_object.deno_layer.version_id
compatible_runtimes = ["provided"]
description = "Custom Deno runtime by TeknoCerdas.com"
source_code_hash = filebase64sha256("${path.module}/../build/layer.zip")
}</code></pre>
When applying the resources we get the error below.
<pre><code>Error: Error creating lambda layer: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
{
RespMetadata: {
StatusCode: 400,
RequestID: "888bed7e-5345-4d5e-ab0e-0d8c683f49b2"
},
Message_: "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.",
Type: "User"
}</code></pre>
<h3>Solution to S3 Leading Slash in Terraform</h3>
<p>The solution is simply to remove the leading slash from the key or filename. So instead of writing <code>/deno-custom-runtime/function.zip</code>, use <code>deno-custom-runtime/function.zip</code>.</p>
<p>Problem solved. Simple and stupid.</p>
<h4>References</h4>
<ul><li><a href="https://github.com/hashicorp/terraform/pull/15738">https://github.com/hashicorp/terraform/pull/15738</a></li></ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-60082498214569510512020-04-08T04:18:00.002+07:002020-11-24T08:33:51.162+07:00How to Escape Character when Running SSH Remote Command<h3>Solution of Escaping Character on SSH</h3>
<p>The solution is: DO NOT escape it. Use a <code>cat</code> HEREDOC to build the string of commands and then pipe it to SSH.</p>
<pre><code>$ export VARIABLE="FOO BAR"
$ cat <<EOF | ssh user@myserver bash
echo "This is complex command"
echo "Another command that take a $VARIABLE."
sudo mkdir /tmp/foo && echo "SUCCESS"
EOF</code></pre>
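The quoting behavior can be demonstrated locally by piping the heredoc into a local <code>bash</code> (a sketch; the local bash stands in for <code>ssh user@myserver bash</code>):

```shell
export VARIABLE="FOO BAR"
# Unquoted EOF: the local shell expands $VARIABLE before piping,
# exactly as it would before the commands reach a remote server.
cat <<EOF | bash
echo "Expanded locally: $VARIABLE"
EOF
```

The output is <code>Expanded locally: FOO BAR</code>, showing that the expansion happened on the local side of the pipe.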
<p>If you have a lot of dollar signs ($) in your script and do not want the local shell to interpret them, use a single-quoted HEREDOC.</p>
<pre><code>$ export VARIABLE_NAME="FOO BAR"
$ cat <<'EOF' | ssh user@myserver bash
export VARIABLE="SSH VAR"
echo "This is complex command"
echo "Another command that take a $VARIABLE name."
sudo mkdir /tmp/foo && echo "SUCCESS"
EOF</code></pre>
<p>In the command above, the value of $VARIABLE is "SSH VAR", since it takes the value from the remote server, not from the local machine.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-62929956838977887602020-03-10T20:12:00.002+07:002020-11-24T08:33:21.836+07:00Variable Variables in Shell<h3>Substitute Variable inside Variable in Bash</h3>
<p>The example below uses Bash indirect expansion for variable variable substitution.</p>
<pre><code>$ hello="Hello World"
$ foobar="hello"
$ echo "${!foobar}"
Hello World</code></pre>
<h3>Substitute Variable inside Variable using eval</h3>
<p>This is for other shells that do not recognize the "${!var}" syntax. It utilises eval, so use it with caution.</p>
<pre><code>$ hello="Hello World"
$ foobar="hello"
$ eval echo "\$${foobar}"
Hello World</code></pre>
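In recent Bash (4.3 and newer) there is a third option worth knowing: a nameref, which avoids eval entirely. This is a sketch and requires Bash, not a plain POSIX shell:

```shell
#!/bin/bash
hello="Hello World"
foobar="hello"
declare -n ref=$foobar   # ref is now another name for the variable "hello"
echo "$ref"              # prints: Hello World
```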
<h3>References for Variable Variables in Shell</h3>
<ul>
<li><a href="https://stackoverflow.com/a/2634767/2566677">https://stackoverflow.com/a/2634767/2566677</a></li>
<li><a href="http://mywiki.wooledge.org/BashFAQ/006">http://mywiki.wooledge.org/BashFAQ/006</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-48118084636334875752020-03-03T06:55:00.004+07:002022-04-05T21:57:43.103+07:00Terraform: Force Destroy Resource when prevent_destroy is true<h2>How to Force Destroy Resource in Terraform</h2>
<p>A Terraform resource with the lifecycle <code>prevent_destroy = true</code> cannot be destroyed. You need to edit the file in place and change the value of <code>prevent_destroy</code> to <code>false</code> each time you want to destroy the resource. Instead of editing manually and making the git status dirty, we can automate this with a simple shell script.</p>
<h3>Automate Force Destroy Resource in Terraform</h3>
<p>The idea is simple.</p>
<ol>
<li>Search all *.tf files and change <code>prevent_destroy = true</code> to <code>prevent_destroy = false</code></li>
<li>Run the <code>terraform destroy</code> command</li>
<li>Revert the changes back to <code>prevent_destroy = true</code></li>
</ol>
Here is the implementation in Bash.
<pre><code>#!/bin/bash
# Temporarily allow destroy on all resources
find . -name '*.tf' -type f \
    -exec perl -i -pe 's@prevent_destroy = true@prevent_destroy = false@g' {} \;
# Run terraform destroy
[ "$IS_PLAN" = "yes" ] && terraform plan -destroy || terraform destroy "$@"
# Revert the changes
find . -name '*.tf' -type f \
    -exec perl -i -pe 's@prevent_destroy = false@prevent_destroy = true@g' {} \;</code></pre>
Save the file under a name such as terraform-force-destroy.sh. To issue the <code>terraform plan -destroy</code> command use the following.
<pre><code>$ IS_PLAN=yes bash terraform-force-destroy.sh</code></pre>
To force destroy the resource use the following command.
<pre><code>$ bash terraform-force-destroy.sh -auto-approve</code></pre>
You can pass the normal Terraform arguments, just like with the original <code>terraform destroy</code>.
<h4>References for Terraform Force Destroy</h4>
<ul>
<li><a href="https://github.com/hashicorp/terraform/issues/3640">https://github.com/hashicorp/terraform/issues/3640</a></li>
<li><a href="https://stackoverflow.com/questions/38354725/prevent-sed-from-adding-newlines-at-the-end-of-files">https://stackoverflow.com/questions/38354725/prevent-sed-from-adding-newlines-at-the-end-of-files</a></li>
<li><a href="https://spacelift.io/blog/how-to-destroy-terraform-resources">https://spacelift.io/blog/how-to-destroy-terraform-resources</a></li>
</ul>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-181262482510042921.post-721865488620729302019-10-25T06:05:00.001+07:002019-10-25T07:17:16.123+07:00Expose module in global scope using Browserify or Webpack<h3>Goal</h3>
<p>You want to expose a module in the global scope so it can be called from an HTML file. As an example, we will create a small function for reversing a string and expose it as <code>StrReverse</code>.</p>
<pre class="brush: plain; gutter: true">
// File main.js
module.exports = function(str) {
return str.split('').reverse().join('');
}
</pre>
<h3>Browserify</h3>
<pre class="brush: plain; gutter: false">
$ browserify --standalone StrReverse main.js --outfile bundle.js
</pre>
<p>The key is <code>--standalone</code> parameter.</p>
<h3>Webpack</h3>
<pre class="brush: plain; gutter: false">
$ webpack-cli --mode=none --output-library StrReverse main.js --output bundle.js
</pre>
<p>The key is <code>--output-library</code> parameter.</p>
<h3>Test in HTML</h3>
<p>Create an HTML file and include <code>bundle.js</code> via a <code><script></code> tag.</p>
<pre class="brush: plain; gutter: true">
<!DOCTYPE html>
<html>
<body>
<script src="bundle.js"></script>
<script>
var reversed = StrReverse("Hello World");
document.write(reversed);
</script>
</body>
</html>
</pre>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-44092591860659091862019-07-29T22:59:00.000+07:002019-07-29T23:00:42.729+07:00 Compile Swoole Extension on MacOS using Homebrew<h2>What is Swoole</h2>
<p>Swoole is a production-grade async programming framework for PHP. It helps you write high-performance asynchronous non-blocking I/O code, similar to Go or NodeJS.</p>
<h2>How to Compile using Homebrew</h2>
<p>I am using MacOS High Sierra and PHP 7.2.20.</p>
<pre class="brush: plain; gutter: false">
$ export LD_LIBRARY_PATH=$( brew --prefix openssl )/lib
$ export CPATH=$( brew --prefix openssl)/include
$ export PKG_CONFIG_PATH=$( brew --prefix openssl )/lib/pkgconfig
$ pecl install swoole
...
Configuring for:
PHP Api Version: 20170718
Zend Module Api No: 20170718
Zend Extension Api No: 320170718
enable sockets supports? [no] : yes
enable openssl support? [no] : yes
enable http2 support? [no] : no
enable mysqlnd support? [no] : no
...
... some long message about compiling
...
Build process completed successfully
Installing '/usr/local/Cellar/php@7.2/7.2.20/include/php/ext/swoole/config.h'
Installing '/usr/local/Cellar/php@7.2/7.2.20/pecl/20170718/swoole.so'
install ok: channel://pecl.php.net/swoole-4.4.2
Extension swoole enabled in php.ini
</pre>
<a name='more'></a>
Now, to make sure it is properly installed, run the following command.
<pre class="brush: plain; gutter: false">
php -m|grep swoole
swoole
</pre>
By default the statement to load the swoole extension is put in <code>/usr/local/etc/php/7.2/php.ini</code>. I prefer to delete the line below from <code>php.ini</code>.
<pre class="brush: plain; gutter: false">
extension="swoole.so"
</pre>
Then I move the extension loading to the file <code>/usr/local/etc/php/7.2/conf.d/swoole.ini</code>.
<pre class="brush: plain; gutter: false">
$ cat > /usr/local/etc/php/7.2/conf.d/swoole.ini
extension="swoole.so"
CTRL+D
</pre>
<h3>Reference</h3>
<p><a href="https://github.com/libimobiledevice/libimobiledevice/issues/389">https://github.com/libimobiledevice/libimobiledevice/issues/389</a></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-36181027905939980872019-03-14T09:55:00.000+07:002019-03-14T09:55:26.055+07:00MySQL DISTINCT with Case Sensitive<h3>Goals</h3>
<p>We want MySQL DISTINCT to use case-sensitive grouping, because by default MySQL uses case-insensitive comparison.</p>
<h3>Solution of MySQL DISTINCT case sensitive</h3>
<p>We can cast the expression to BINARY so the comparison is done byte by byte, i.e. case-sensitively.</p>
<pre class="brush: plain; gutter: false">
SELECT DISTINCT CAST(expr as BINARY)
</pre>
As an alternative we can just use BINARY.
<pre class="brush: plain; gutter: false">
SELECT BINARY expr
</pre>
<h3>References</h3>
<ul>
<li><a href="https://stackoverflow.com/questions/19462919/mysql-select-distinct-should-be-case-sensitive">https://stackoverflow.com/questions/19462919/mysql-select-distinct-should-be-case-sensitive</a></li>
<li><a href="https://dev.mysql.com/doc/refman/5.6/en/charset-binary-set.html">https://dev.mysql.com/doc/refman/5.6/en/charset-binary-set.html</a></li>
</ul>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-64912305512300794852019-03-08T20:54:00.002+07:002019-03-14T09:51:12.412+07:00Disable Word Wrap on MySQL Shell<h3>Goals</h3>
<p>Turn off or disable word wrap on MySQL shell</p>
<h3>Solution of Disable Word Wrap on MySQL Shell</h3>
<p>We can use an external pager such as <code>less</code> to do the job. The pager in the MySQL shell is actually a pipe to another program.</p>
<pre class="brush: plain; gutter: false">
mysql> pager less -SFX
PAGER set to 'less -SFX'
</pre>
<p>That's it. Simple and easy. Now when the output is very long horizontally, it will not wrap.</p>
<h3>Reference</h3>
<ul><li><a href="http://blog.clearandfizzy.com/post/145657760526/mysql-turn-off-word-wrap">http://blog.clearandfizzy.com/post/145657760526/mysql-turn-off-word-wrap</a></li></ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-9851109728154880232018-05-07T06:48:00.000+07:002018-05-07T06:49:31.547+07:00Curl Dump Response Headers to STDOUT and Ignore Response Body<h3>Goals</h3>
<p>Return only the HTTP response headers when opening a web page using curl. This is useful when we are interested in processing the response headers only.</p>
<h3>Command</h3>
<p>We will utilize /dev/stdout and /dev/null to achieve what we want.</p>
<pre class="brush: plain; gutter: false">
$ curl https://notes.rioastamal.net -D /dev/stdout -o /dev/null --silent
HTTP/2 200
date: Sun, 06 May 2018 23:42:37 GMT
content-type: text/html; charset=UTF-8
set-cookie: __cfduid=d6347d57f364b276150034b241b19cdb01525650157; expires=Mon, 06-May-19 23:42:37 GMT; path=/; domain=.rioastamal.net; HttpOnly
expires: Sun, 06 May 2018 23:42:37 GMT
cache-control: private, max-age=0
last-modified: Sun, 06 May 2018 23:42:05 GMT
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 416f4deb6f51a320-HKG
</pre>
<h3>Reference</h3>
<ul>
<li><a href="https://unix.stackexchange.com/questions/16357/usage-of-dash-in-place-of-a-filename">https://unix.stackexchange.com/questions/16357/usage-of-dash-in-place-of-a-filename</a></li></ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-73837191236715006262018-05-03T08:18:00.001+07:002018-05-03T08:23:24.895+07:00Generate Random String in Shell Using /dev/urandom<h3>Goals</h3>
<p>Generate a random string in shell using /dev/urandom as the source. Such a random string is typically useful as an encryption key.</p>
<h3>Implementation</h3>
<p>We will use a combination of <strong>tr</strong> and <strong>head</strong> to generate 32 random characters. The command below outputs alphanumeric characters and a set of symbols only.</p>
<pre class="brush: php;gutter: false">
$ </dev/urandom tr -dc 'A-Za-z0-9!"#$%&()*+,-./:;<=>?@[\]^_`{|}~' | head -c 32 && echo
}s9s2c8W7aZlI:yg<{bg&-<7YnyJEk.u
</pre>
<p>On a Mac OS X system you may need to define the LC_ALL=C environment variable, as shown below.</p>
<pre class="brush: php;gutter: false">
$ LC_ALL=C </dev/urandom tr -dc 'A-Za-z0-9!"#$%&()*+,-./:;<=>?@[\]^_`{|}~' | head -c 32 && echo
f(s_TPj*.H3Z/s[*:zLe[=9&0$FF"*8[
</pre>
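A quick sanity check, restricted to alphanumerics for simplicity, confirms the pipeline really yields 32 characters:

```shell
# LC_ALL=C prefixes tr so byte ranges behave predictably.
key=$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)
printf '%s' "$key" | wc -c   # prints: 32
```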
<h3>References</h3>
<ul><li><a href="https://www.howtogeek.com/howto/30184/10-ways-to-generate-a-random-password-from-the-command-line/">https://www.howtogeek.com/howto/30184/10-ways-to-generate-a-random-password-from-the-command-line/</a></li>
<li><a href="https://unix.stackexchange.com/questions/230673/how-to-generate-a-random-string">https://unix.stackexchange.com/questions/230673/how-to-generate-a-random-string</a></li></ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-6148899526770381172018-04-03T06:03:00.000+07:002018-05-03T06:40:51.608+07:00How to Flatten Multidimensional Array in PHP<h3>Goals</h3>
<p>Turn a PHP multi-dimensional array into a one-dimensional array (flatten it).</p>
<h3>Solution</h3>
<p>We will use the Standard PHP Library (SPL) to tackle the problem. Assume we have an array like the one below.</p>
<pre class="brush: php;gutter: false">
$origin = [
'Level 1',
'_2_' => [
'Level 2',
'_3_' => [
'Level 3',
'_4_' => [
'Level 4',
'_5_' => [
'Level 5'
]
]
]
],
'Another Level 1',
'_2_1' => [
'Another Level 2'
]
];
</pre>
<p>Turn it into a one-dimensional array by using the SPL <code>RecursiveIteratorIterator</code> and <code>RecursiveArrayIterator</code> classes.</p>
<a name='more'></a>
<pre class="brush: php;gutter: false">
$flat = [];
foreach (new RecursiveIteratorIterator(new RecursiveArrayIterator($origin)) as $leaf) {
$flat[] = $leaf;
}
print_r($flat);
</pre>
<p>We should get a flattened array.</p>
<pre class="brush: plain;gutter: false">
Array
(
[0] => Level 1
[1] => Level 2
[2] => Level 3
[3] => Level 4
[4] => Level 5
[5] => Another Level 1
[6] => Another Level 2
)
</pre>
<h3>Reference</h3>
<ul>
<li><a href="http://php.net/manual/en/recursiveiteratoriterator.construct.php">http://php.net/manual/en/recursiveiteratoriterator.construct.php</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-45190955565947635822017-01-25T21:07:00.003+07:002017-01-25T21:11:29.432+07:00How to Remove Multiple Redis Cache<h3>Goals</h3>
<p>Remove multiple Redis cache keys using a one-liner command.</p>
<h3>Steps</h3>
<p>Assume the keys we want to delete begin with the 'laravel:' prefix.</p>
<pre class="brush: plain;gutter: false">
$ redis-cli KEYS 'laravel:*'
1) "laravel:featured:2bf46b40ed15"
2) "laravel:list:5509f57fdaef"
3) "laravel:list:2bf46b40ed15"
4) "laravel:list:e1bf5bfc2ab8"
5) "laravel:list-total-rec:adc81409d693"
6) "laravel:list-total-rec:e1bf5bfc2ab8"
7) "laravel:promotion-list:adc81409d693"
8) "laravel:list:d1f68333e3bb"
9) "laravel:list:c6a45c5cf5c5"
</pre>
<p>Just pipe the output above to the xargs command. By default redis-cli uses the raw output format when STDOUT is not a tty, which is the case when piping to xargs.</p>
<pre class="brush: plain;gutter: false">
$ redis-cli KEYS 'laravel:*' | xargs redis-cli DEL
(integer) 9
</pre>
<h3>Reference</h3>
<ul><li><a href="https://redis.io/commands/">https://redis.io/commands/</a></li></ul>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-181262482510042921.post-53543157799747176202016-11-25T20:06:00.001+07:002018-04-03T09:14:39.566+07:00Redis: How to Increase File Descriptor Limits<h3>Problem</h3>
<p>When you run the Redis server it complains that it cannot set the maximum open files because it has reached the OS file descriptor limit. Here is a sample output.</p>
<pre class="brush:plain; gutter: false">
$ ./bin/redis-server
28436:C 25 Nov 20:10:03.978 # Warning: no config file specified, using the default config. In order to specify a config file use ./bin/redis-server /path/to/redis.conf
28436:M 25 Nov 20:10:03.979 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
28436:M 25 Nov 20:10:03.979 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
28436:M 25 Nov 20:10:03.979 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
[...CUT...]
</pre>
<p>When you try to increase the maximum file descriptors using ulimit as root by issuing sudo, it returns an error.</p>
<pre class="brush:plain; gutter: false">
$ sudo ulimit -n 65000
sudo: ulimit: command not found
</pre>
<p>Wow, WTF is that? <code>ulimit</code> is a shell builtin, so giving sudo an instruction to run a command called ulimit will not work. It is the same as the statement below.</p>
<a name='more'></a>
<pre class="brush:plain; gutter: false">
$ sudo for
sudo: for: command not found
</pre>
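Since <code>ulimit</code> is a builtin, the way to invoke it non-interactively is to start a shell just for that purpose; the limit value here is illustrative:

```shell
# Lower the soft limit to 1024 for this child shell only, then print it back.
sh -c 'ulimit -n 1024 && ulimit -n'   # prints: 1024
```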
<h3>Solution</h3>
<p>The easiest solution is to become the root user to set the file descriptor limit using ulimit, and right after that run Redis as a normal user, all in a single command line.</p>
<pre class="brush:plain; gutter: false">
$ sudo sh -c "ulimit -n 65000 && exec su -c '/path/to/redis/bin/redis-server' rio"
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.2.5 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 30789
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
30789:M 25 Nov 20:33:03.362 # Server started, Redis version 3.2.5
30789:M 25 Nov 20:33:03.362 * DB loaded from disk: 0.000 seconds
30789:M 25 Nov 20:33:03.362 * The server is now ready to accept connections on port 6379
</pre>
<h3>References</h3>
<ul>
<li><a href="http://serverfault.com/questions/623577/why-does-redis-report-limit-of-1024-files-even-after-update-to-limits-conf/627650#627650">http://serverfault.com/questions/623577/why-does-redis-report-limit-of-1024-files-even-after-update-to-limits-conf/627650#627650</a></li>
<li><a href="http://unix.stackexchange.com/questions/81843/sudo-ulimit-command-not-found">http://unix.stackexchange.com/questions/81843/sudo-ulimit-command-not-found</a></li>
</ul>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-181262482510042921.post-74565789647926648972016-10-07T19:36:00.003+07:002016-10-07T19:37:56.475+07:00How to Fix No Sound After Mute and Unmute on XFCE<h3>Problem</h3>
<p>There is no sound after doing mute then unmute on XFCE 4 Ubuntu 14.04.</p>
<h3>Solution</h3>
<p>Try running <code>amixer</code> command to see the status of the Master sound.</p>
<pre class="brush:plain; gutter: false">
$ amixer get Master
Simple mixer control 'Master',0
Capabilities: pvolume pswitch pswitch-joined
Playback channels: Front Left - Front Right
Limits: Playback 0 - 65536
Mono:
Front Left: Playback 65536 [100%] [off]
Front Right: Playback 65536 [100%] [off]
</pre>
<p>In the result above the status of Front Left and Front Right is [off], meaning the channel is still <i>muted</i> even though it has been unmuted from the XFCE panel. Toggle the switch to make it [on].</p>
<a name='more'></a>
<pre class="brush:plain; gutter: false">
$ amixer set Master toggle
Simple mixer control 'Master',0
Capabilities: pvolume pswitch pswitch-joined
Playback channels: Front Left - Front Right
Limits: Playback 0 - 65536
Mono:
Front Left: Playback 65536 [100%] [on]
Front Right: Playback 65536 [100%] [on]
</pre>
Try to play some music to see if it works.
<h3>Reference</h3>
<ul>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=139723">https://bbs.archlinux.org/viewtopic.php?id=139723</a></li>
</ul>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-181262482510042921.post-43061728881359959262016-09-24T21:30:00.001+07:002016-09-24T21:34:22.094+07:00How to Extract Specific Directory from Tarball<h3>Problem</h3>
<p>We have a huge gzipped tarball and we want to extract only a specific directory from it.</p>
<h3>Solution</h3>
<p>First, find the exact path we want to extract by searching for it. As an example we want to extract a directory named <b>johndoe-website</b>, but we do not know the full path of the directory inside the archive.</p>
<pre class="brush:plain; gutter: false">
$ tar tvf the-archive.tar.gz | grep johndoe-website
home/sites/clients/johndoe-website/javascripts/main.js
home/sites/clients/johndoe-website/styles/main.css
home/sites/clients/johndoe-website/index.html
</pre>
From the output above we know that the path of the directory is <b>home/sites/clients/johndoe-website</b>. The command below extracts johndoe-website from the archive and strips the 3 leading directories.
<pre class="brush:plain; gutter: false">
$ tar xvf the-archive.tar.gz --strip-components=3 -C /destination/path home/sites/clients/johndoe-website
</pre>
The command above works with GNU tar and BSD tar (Mac OS X).
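The whole flow can be rehearsed with a throwaway archive built in a temporary directory (a self-contained sketch using GNU tar; file names mirror the example above):

```shell
# Build a small archive with the nested layout from the example.
tmp=$(mktemp -d)
mkdir -p "$tmp/home/sites/clients/johndoe-website"
echo 'hello' > "$tmp/home/sites/clients/johndoe-website/index.html"
tar czf "$tmp/the-archive.tar.gz" -C "$tmp" home
# Extract only johndoe-website, stripping the 3 leading components.
mkdir "$tmp/dest"
tar xzf "$tmp/the-archive.tar.gz" --strip-components=3 -C "$tmp/dest" \
    home/sites/clients/johndoe-website
cat "$tmp/dest/johndoe-website/index.html"   # prints: hello
```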
<a name='more'></a>
<h3>Reference</h3>
<ul>
<li><a href="http://unix.stackexchange.com/questions/3786/how-do-i-extract-a-specific-directory-from-a-tarball-and-strip-a-leading-direct">http://unix.stackexchange.com/questions/3786/how-do-i-extract-a-specific-directory-from-a-tarball-and-strip-a-leading-direct</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-5178031369796822342016-07-27T20:13:00.001+07:002016-07-27T20:26:01.552+07:00Quickest Way: Using STDIN and Pipe to Copy SSH Public Key to Server<h3>Goal</h3>
<p>Copy an SSH public key to another machine without using external tools such as ssh-copy-id - only pure shell built-ins or at least standard commands.</p>
<h3>Solution</h3>
<p>The solution is to use shell STDIN and pipe it to ssh.</p>
<pre class="brush:plain; gutter: false">
$ cat ~/.ssh/id_rsa.pub | ssh user@hostname 'cat >> .ssh/authorized_keys -'
</pre>
The quoting of the ssh arguments is important, because without it the redirection happens on your local machine instead of the remote machine. The "-" at the end of the cat command on the remote side indicates that it reads its input from STDIN.
<h3>Reference</h3>
<ul>
<li><a href="http://askubuntu.com/questions/4830/easiest-way-to-copy-ssh-keys-to-another-machine">http://askubuntu.com/questions/4830/easiest-way-to-copy-ssh-keys-to-another-machine</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-48721054833457350932016-07-20T23:05:00.001+07:002016-07-21T06:00:25.392+07:00Expose Port Inside Running Container on Docker Toolbox for Mac<h3>Problem</h3>
<p>Docker only allows ports to be exposed at container creation time. When the container is already running and a new port needs to be exposed, you're out of luck.</p>
<h3>Goal</h3>
<p>You want to expose a new port used by an application inside a running container, so you can hit docker-vm-ip:port to access the port from Mac OS X.</p>
<h3>Assumptions</h3>
<ul>
<li>IP of Boot2Docker VM (Which run by Virtualbox) is <b>192.168.99.100</b></li>
<li>IP of the docker container running the application is <b>172.17.0.2</b></li>
<li>The application listen on address <b>0.0.0.0</b> and port <b>80</b></li>
</ul>
<a name='more'></a>
<h3>Steps</h3>
<ol>
<li>SSH to the Boot2Docker using user "docker" and password "tcuser"
<pre class="brush:plain; gutter: false">
$ ssh docker@192.168.99.100
docker@192.168.99.100's password:
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.11.2, build HEAD : a6645c3 - Wed Jun 1 22:59:51 UTC 2016
Docker version 1.11.2, build b9f10c9
docker@default:~$
</pre>
</li>
<li>Forward the port from 192.168.99.100:80 to 172.17.0.2:80
<pre class="brush:plain; gutter: false">
$ sudo iptables -t nat -A PREROUTING -d 192.168.99.100/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
</pre>
</li>
</ol>
Done. Now just try to connect from the Mac OS X host to the Boot2Docker IP. It should be forwarded to the application inside the container.
<pre class="brush:plain; gutter: false">
$ curl -i http://192.168.99.100
</pre>
You can repeat this as many times as you want for other ports.
<h3>Reference</h3>
<ul>
<li><a href="https://github.com/boot2docker/boot2docker/issues/550">https://github.com/boot2docker/boot2docker/issues/550</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-53609957192552706592016-06-18T09:47:00.001+07:002016-06-18T10:12:06.934+07:00How to Create Root Certificate Authority (CA) and Self Signed Certificate<h3>Goal</h3>
<p>Make client applications such as web browsers trust our self-signed certificate, so we can use any custom domain in development or on an internal network.</p>
<h3>Generate Root CA</h3>
<p>The first step is to generate a private key for our Certificate Authority (CA). The command below generates an RSA-based private key with a 2048-bit key size.</p>
<pre class="brush:plain; gutter: false">
$ mkdir self-root-ca && cd self-root-ca
$ openssl genrsa -out myRootCA.key 2048
Generating RSA private key, 2048 bit long modulus
.................+++
................+++
e is 65537 (0x10001)
$ chmod 0600 myRootCA.key
</pre>
<p>The command above produces a file called <code>myRootCA.key</code>. The chmod command makes sure that only the super user and the creator of the key are able to read the file.</p>
<a name='more'></a>
<p>Next we create the root certificate, which lasts for 1024 days (around 3 years). It produces a file called <code>myRootCA.crt</code>. This is the file that you need to put on workstations or client applications.</p>
<pre class="brush:plain; gutter: false">
$ openssl req -x509 -new -nodes -key myRootCA.key \
-sha256 -days 1024 -out myRootCA.crt \
-subj "/C=ID/ST=Bali/L=Badung/O=My Company Inc./OU=IT Security/CN=My Root CA/emailAddress=me@rioastamal.net"
</pre>
<p>To prevent interactive prompts we use the -subj argument.</p>
<ul>
<li><b>C</b>: Country Id</li>
<li><b>ST</b>: State/Province</li>
<li><b>L</b>: Location/City</li>
<li><b>O</b>: Organization name</li>
<li><b>OU</b>: Organization Unit name</li>
<li><b>CN</b>: Common Name</li>
<li><b>emailAddress</b>: Email address of the person responsible for this certificate</li>
</ul>
<p>Verify the certificate to make sure the generated file is correct.</p>
<pre class="brush:plain; gutter: false">
$ openssl x509 -in myRootCA.crt -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 10646048622805375477 (0x93be5cd137d9d9f5)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=ID, ST=Bali, L=Badung, O=My Company Inc., OU=IT Security, CN=My Root CA/emailAddress=me@rioastamal.net
Validity
Not Before: Jun 18 01:04:46 2016 GMT
Not After : Apr 8 01:04:46 2019 GMT
[..... SNIP .....]
</pre>
<p>Every time you need to generate a new self-signed certificate, use these two files, myRootCA.key and myRootCA.crt, as the Certificate Authority for signing.</p>
<h3>Generate Self Signed Certificate for Domain mycooldomain.local</h3>
<p>Create a private key for the certificate we want to issue. It will produce a file called mycooldomain.local.key.</p>
<pre class="brush:plain; gutter: false">
$ openssl genrsa -out mycooldomain.local.key 2048
Generating RSA private key, 2048 bit long modulus
.................................................................+++
........+++
e is 65537 (0x10001)
$ chmod 0600 mycooldomain.local.key
</pre>
<p>The second step is to generate the Certificate Signing Request (CSR). The most important field when creating a CSR is the <b>Common Name</b>, which specifies the hostname for which this certificate is issued.</p>
<pre class="brush:plain; gutter: false">
$ openssl req -new -key mycooldomain.local.key \
-out mycooldomain.local.csr \
-subj "/C=ID/ST=Bali/L=Badung/O=My Company Inc./OU=My Cool Department/CN=mycooldomain.local/emailAddress=me@rioastamal.net"
</pre>
<p>The last step is to generate the signed certificate from the CSR we just created, using the root CA's key and certificate.</p>
<pre class="brush:plain; gutter: false">
$ openssl x509 -req -in mycooldomain.local.csr \
-CA myRootCA.crt -CAkey myRootCA.key -CAcreateserial \
-out mycooldomain.local.crt -days 730 -sha256
Signature ok
subject=/C=ID/ST=Bali/L=Badung/O=My Company Inc./OU=My Cool Department/CN=mycooldomain.local/emailAddress=me@rioastamal.net
Getting CA Private Key
</pre>
<p>The command above will produce a file called mycooldomain.local.crt. You can use this file together with mycooldomain.local.key in server applications such as Apache or Nginx.</p>
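<p>For a quick sanity check, the whole flow above can be replayed end to end in a scratch directory. This is a minimal sketch of the same steps with placeholder names (ca.key, demo.local, "Demo Root CA"), not the file names used in this article:</p>
<pre class="brush:plain; gutter: false">
#!/bin/sh
# Sketch: replay the CA and signing steps in a throwaway directory.
# All file names and subjects below are placeholders.
set -e
dir=$(mktemp -d) && cd "$dir"

# 1. Root CA: private key plus self-signed root certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1024 \
    -out ca.crt -subj "/CN=Demo Root CA"

# 2. Leaf: private key plus CSR for the domain
openssl genrsa -out demo.local.key 2048
openssl req -new -key demo.local.key -out demo.local.csr \
    -subj "/CN=demo.local"

# 3. Sign the CSR with the root CA key and certificate
openssl x509 -req -in demo.local.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out demo.local.crt -days 730 -sha256

# 4. Check the chain; prints "demo.local.crt: OK" on success
openssl verify -CAfile ca.crt demo.local.crt
</pre>
<p>The same <code>openssl verify -CAfile</code> invocation is a handy way to check any certificate you later issue against myRootCA.crt.</p>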
<h3>Testing the Certificate</h3>
<h4>Import the Root CA into the Firefox</h4>
<p>Once the root certificate is imported into Firefox, it will trust every certificate signed by the root CA, including our mycooldomain.local.</p>
<ol>
<li>Open Menu (Edit) > Preferences</li>
<li>Choose <b>Advanced</b></li>
<li>Choose <b>Certificates</b> tab</li>
<li>Click <b>View Certificates</b></li>
<li>Choose <b>Authorities</b> tab</li>
<li>Click <b>Import...</b></li>
<li>Locate myRootCA.crt</li>
<li>Put a check mark on all the options</li>
<li>Click <b>OK</b></li>
</ol>
<h4>Using the Self Signed Certificate on Apache</h4>
<p>We will create a new virtual host for the domain mycooldomain.local. Run all these commands as root or via sudo.</p>
<p>Enable SSL module</p>
<pre class="brush:plain; gutter: false">
$ sudo a2enmod ssl
$ sudo service apache2 restart
</pre>
<p>Create the virtual host file. Hit CTRL-D to finish writing the contents when using the tee command. Change the paths according to where you store the certificate files.</p>
<pre class="brush:plain; gutter: false">
$ cd /etc/apache2/sites-available
$ sudo tee 002-mycooldomain.local.conf > /dev/null
&lt;VirtualHost *:443&gt;
    ServerName mycooldomain.local
    SSLEngine on
    SSLCertificateFile /home/astadev/Documents/self-root-ca/mycooldomain.local.crt
    SSLCertificateKeyFile /home/astadev/Documents/self-root-ca/mycooldomain.local.key
    DocumentRoot /home/astadev/Documents/self-root-ca/htdocs
    &lt;Directory /home/astadev/Documents/self-root-ca/htdocs&gt;
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Require all granted
    &lt;/Directory&gt;
&lt;/VirtualHost&gt;
$ sudo a2ensite 002-mycooldomain.local.conf
$ sudo service apache2 reload
</pre>
<p>If you don't have a DNS server, just add mycooldomain.local to /etc/hosts. Visit https://mycooldomain.local and Firefox will say that this site is verified by My Company Inc.</p>
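<p>For example, a hypothetical /etc/hosts entry for a site served from the same machine (the address is an assumption, adjust it to your setup):</p>
<pre class="brush:plain; gutter: false">
$ echo '127.0.0.1 mycooldomain.local' | sudo tee -a /etc/hosts
</pre>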
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXuCJCdQfBaZyQvOjYF-ETGoY4CQxGwrwbP6XuHzwBbr849KNXLZepzJcSTxc3qXUlKe1q-7YbVYvBGVC_r_Q7jLWnZaNeCG9EUkWiRgID2J_ysjEggfSJKPNYJfcDiNcVVyt4DXB7019C/s1600/certificate-root-ca-firefox.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXuCJCdQfBaZyQvOjYF-ETGoY4CQxGwrwbP6XuHzwBbr849KNXLZepzJcSTxc3qXUlKe1q-7YbVYvBGVC_r_Q7jLWnZaNeCG9EUkWiRgID2J_ysjEggfSJKPNYJfcDiNcVVyt4DXB7019C/s1600/certificate-root-ca-firefox.png" /></a></div>
<div style="clear:both;"> </div>
<h4>References</h4>
<ul>
<li><a href="https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/">https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/</a></li>
<li><a href="http://www.shellhacks.com/en/HowTo-Create-CSR-using-OpenSSL-Without-Prompt-Non-Interactive">http://www.shellhacks.com/en/HowTo-Create-CSR-using-OpenSSL-Without-Prompt-Non-Interactive</a></li>
<li><a href="https://jamielinux.com/docs/openssl-certificate-authority/index.html">https://jamielinux.com/docs/openssl-certificate-authority/index.html</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-65163416964036620752016-06-14T20:08:00.001+07:002016-06-14T20:30:14.717+07:00Custom Solution for Managing ssh-agent without Gnome Keyring<h3>Goal</h3>
Enter the ssh private key passphrase only once, without having it managed by Gnome Keyring. The ssh agent should remain detected in every new terminal spawned, and even on a tty console (CTRL+ALT+F1 and so on).
<h3>Solutions</h3>
We will utilize ssh-add, ssh-agent and a little bit of shell scripting to achieve the goal.
<h4>Step 1</h4>
First, start the authentication agent and redirect its output to a file so we can gather the agent information later.
<pre class="brush:plain;">
$ ssh-agent -s > /tmp/my-ssh-agent.sh
</pre>
Evaluate the file's contents so the current shell has the environment variables needed by ssh-add.
<a name='more'></a>
<pre class="brush:plain;">
$ eval $( grep -v ^echo /tmp/my-ssh-agent.sh )
</pre>
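<p>For reference, the saved file contains something like the following (the socket path and PID will differ on your machine). The <code>grep -v ^echo</code> drops the last line, so that <code>eval</code> only sets and exports the two variables:</p>
<pre class="brush:plain;">
$ cat /tmp/my-ssh-agent.sh
SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.12345; export SSH_AUTH_SOCK;
SSH_AGENT_PID=12345; export SSH_AGENT_PID;
echo Agent pid 12345;
</pre>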
Now add our keys to the agent.
<pre class="brush:plain;">
$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /home/rio/.ssh/id_rsa:
Identity added: /home/rio/.ssh/id_rsa (/home/rio/.ssh/id_rsa)
</pre>
Make sure the key is on the authentication agent list.
<pre class="brush:plain;">
$ ssh-add -l
2048 aa:bb:cc:dd:ee:ff:00:11:22:33:ab:bc:cd:de:ef:11 /home/rio/.ssh/id_rsa (RSA)
</pre>
<h4>Step 2</h4>
We need to modify <code>~/.bashrc</code> so that every new bash session loads the authentication agent information saved in /tmp/my-ssh-agent.sh. Append these lines at the end of .bashrc:
<pre class="brush:plain;">
# Include our custom SSH Agent if found
MY_SSH_AGENT=/tmp/my-ssh-agent.sh
if [ -f "$MY_SSH_AGENT" ]; then
    eval $( grep -v ^echo "$MY_SSH_AGENT" )
fi
</pre>
<p>Done. Open a new terminal session and execute <code>ssh-add -l</code>; it will show the key that is already in the authentication agent. Every time you restart your computer, you only need to run Step 1 again to add your keys to the ssh authentication agent.</p>
<p>If you're paranoid you can use <code>ssh-add -c ~/.ssh/id_rsa</code>, so each use of the key pops up a graphical confirmation window where you just need to confirm. See man ssh-add and ssh-askpass.</p>
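<p>One possible refinement, sketched here and not part of the original steps: make the <code>.bashrc</code> snippet also check that the saved agent is still alive, so a stale /tmp/my-ssh-agent.sh left over from before a reboot is ignored:</p>
<pre class="brush:plain;">
# Include our custom SSH Agent if found and still reachable
MY_SSH_AGENT=/tmp/my-ssh-agent.sh
if [ -f "$MY_SSH_AGENT" ]; then
    eval $( grep -v ^echo "$MY_SSH_AGENT" )
    ssh-add -l >/dev/null 2>&1
    # ssh-add exits with 2 when the agent cannot be contacted
    # (1 only means the agent is running but holds no keys yet)
    if [ "$?" -eq 2 ]; then
        unset SSH_AUTH_SOCK SSH_AGENT_PID
    fi
fi
</pre>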
<h4>References</h4>
<ul><li><a href="http://rabexc.org/posts/pitfalls-of-ssh-agents">http://rabexc.org/posts/pitfalls-of-ssh-agents</a></li></ul>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-18824477424839374732016-06-14T18:35:00.002+07:002016-06-14T20:09:57.851+07:00Stop Gnome Keyring for Managing ssh-agent on Xubuntu<h3>Goal</h3>
Stop Gnome Keyring from managing ssh-agent on Xubuntu so you can use the original OpenSSH ssh-agent implementation.
<h3>Quick Solution</h3>
The solution is quite easy because the Gnome Keyring daemon provides a way to replace the existing session.
<pre class="brush:plain;">
$ gnome-keyring-daemon --replace --daemonize --components=pkcs11,secrets,gpg
</pre>
The command above replaces the running Gnome Keyring daemon, restarted this time without the ssh component, so it no longer manages the ssh agent. Execute the command below to verify that Gnome Keyring does not manage the ssh agent anymore.
<a name='more'></a>
<pre class="brush:plain;">
$ ssh-add -l
Could not open a connection to your authentication agent.
</pre>
<h3>Permanent Solution</h3>
The quick solution will not persist once you log out or restart Xubuntu. To make it permanent, the command needs to be added to Session and Startup.
<ol>
<li>Go to Menu > Settings > Session and Startup</li>
<li>Click Application Autostart tab</li>
<li>Click Add button</li>
<li>A new application window will appear; fill it in like the example below
<ul>
<li><b>Name</b>: SSH Keyring Remover</li>
<li><b>Description</b>: Remove SSH from GNOME Keyring</li>
<li><b>Command</b>: <code>gnome-keyring-daemon --replace --daemonize --components=pkcs11,secrets,gpg</code></li>
</ul></li>
<li>Click OK</li>
</ol>
Log out from your desktop session; once you log back in, Gnome Keyring should not manage the ssh agent anymore.
<h3>References</h3>
<ul>
<li><a href="http://askubuntu.com/questions/412793/xubuntu-stop-gnome-keyring-daemon-from-impersonating-ssh-agent">http://askubuntu.com/questions/412793/xubuntu-stop-gnome-keyring-daemon-from-impersonating-ssh-agent</a></li>
<li><a href="http://dtek.net/2012/09/19/how-stop-gnome-keyring-clobbering-opensshs-ssh-agent-ubuntu-1204.html">http://dtek.net/2012/09/19/how-stop-gnome-keyring-clobbering-opensshs-ssh-agent-ubuntu-1204.html</a></li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-69897375698211653222016-05-19T18:41:00.001+07:002016-06-14T18:37:01.830+07:00Starting Ngrok Automatically at Boot Using Upstart<h3>Goal</h3>
Expose the local machine's SSH service to the internet using the tunneling service provided by ngrok.com.
<h3>Steps</h3>
<p>
First things first, create an account at <a href="http://ngrok.com">ngrok.com</a> so we can get the Auth Token, monitor the tunnels we create, and find out their public addresses. Next, create a configuration file at ~/.ngrok2/ngrok.yml to store the token. You can get this token from your Ngrok dashboard.</p>
<a style="display:clear:both;" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWd16syzCIq5WDFectXaKR0ql_QjgII8ZBXIqwfLA4087KRkZxK0OjUrhnTD96PpIVZmsz6pFrdhnCpDZ0UtB3i_JgP2obkwVWdvasKigiBqByUUdMfXzIH56k5q0u1xX84wb9yC9jZ0eF/s1600/ngrok-auth-token.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWd16syzCIq5WDFectXaKR0ql_QjgII8ZBXIqwfLA4087KRkZxK0OjUrhnTD96PpIVZmsz6pFrdhnCpDZ0UtB3i_JgP2obkwVWdvasKigiBqByUUdMfXzIH56k5q0u1xX84wb9yC9jZ0eF/s1600/ngrok-auth-token.png" align="left" /></a>
<pre class="brush:plain;">
$ cat > ~/.ngrok2/ngrok.yml
authtoken: YOUR_NGROK_TOKEN
</pre>
Then create a new file called ngrok.conf in /etc/init, assuming the ngrok binary is located at <code>/opt/ngrok/ngrok</code>.
<a name='more'></a>
<pre class="brush:plain;">
$ cat > /etc/init/ngrok.conf
# Ngrok
#
# Create tunnel provided by ngrok.io
description "Ngrok Tunnel"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 10 5
umask 022
exec /opt/ngrok/ngrok tcp 22
</pre>
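<p>Note that Upstart only exists on older Ubuntu releases; from Ubuntu 15.04 onward the init system is systemd. A roughly equivalent systemd unit, sketched here as /etc/systemd/system/ngrok.service, would be:</p>
<pre class="brush:plain;">
[Unit]
Description=Ngrok Tunnel
After=network-online.target

[Service]
ExecStart=/opt/ngrok/ngrok tcp 22
Restart=on-failure

[Install]
WantedBy=multi-user.target
</pre>
<p>Enable and start it with <code>sudo systemctl enable --now ngrok</code>. Since it runs as root, the auth token must be in root's ~/.ngrok2/ngrok.yml, the same caveat as the Upstart job above.</p>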
<p>To start the daemon manually issue following command.</p>
<pre class="brush:plain;">
$ sudo service ngrok start
$ ps aux|grep ngrok
root 712 0.1 0.6 420044 13200 ? Ssl 13:22 0:28 /opt/ngrok/ngrok tcp 22
root 2927 0.0 0.0 11740 936 pts/0 S+ 19:36 0:00 grep --color=auto ngrok
</pre>
<p>Go to your Ngrok dashboard and check the status of your tunnel. Normally ngrok gives you an address such as <code>tcp://0.tcp.ngrok.io:XYZ</code>, where XYZ is the port number mapped to your local port 22. To connect via SSH, point to that address and the given port.</p>
<pre class="brush:plain;">
$ ssh -p XYZ user@0.tcp.ngrok.io
</pre>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjabt5c1PXOpPyrh08KAQIZ5PVVf9Lg398u_zB7NT5t7UmU5TkSI_Z02xbmx6KCu3LjnJq3DZUIDgcfkAM5uRwy6qs8yE1ikJz9vp2RjvurtlxvMJbt81fvf1gJiECyouKvYrPk8CeRRJJ3/s1600/ngrok-status.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjabt5c1PXOpPyrh08KAQIZ5PVVf9Lg398u_zB7NT5t7UmU5TkSI_Z02xbmx6KCu3LjnJq3DZUIDgcfkAM5uRwy6qs8yE1ikJz9vp2RjvurtlxvMJbt81fvf1gJiECyouKvYrPk8CeRRJJ3/s1600/ngrok-status.png" /></a></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-181262482510042921.post-56548023405792360432016-04-29T20:15:00.002+07:002016-04-29T20:40:34.220+07:00Simplify Multi-Hop SSH Connection Using Config<h3>Goal</h3>
Use the SSH config file to simplify connecting to one host through another, a.k.a. a multi-hop connection. Diagram of the connection:
<pre class="brush:plain;">
+---------------+
| Local Machine |
| 192.168.0.5 |
+---------------+
|
| SSH
\
\/
+-------------------------------------+
| Host Machine 192.168.0.10 |
| / \ |
| / -- SSH -- \ |
| +--------------+ +-------------+ |
| | Docker 1 | | Docker 2 | |
| | 172.17.0.1 | | 172.17.0.2 | |
| +--------------+ +-------------+ |
| |
+-------------------------------------+
</pre>
<a name='more'></a>
<h3>Steps</h3>
Recent OpenSSH implementations have a built-in netcat mode to proxy the connection from one host to another via the <code>-W</code> option.
<pre class="brush:plain;">
$ cat > ~/.ssh/config
Host host-machine
Hostname 192.168.0.10
User ubuntu
Host docker1
Hostname 172.17.0.1
User root
ProxyCommand ssh -A host-machine -W %h:%p
Host docker2
Hostname 172.17.0.2
User root
ProxyCommand ssh -A host-machine -W %h:%p
</pre>
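<p>If your client runs OpenSSH 7.3 or newer, the <code>ProxyCommand ... -W</code> lines can be replaced by the more concise <code>ProxyJump</code> directive (<code>-J</code> on the command line). A sketch of the equivalent entry; no <code>-A</code> is needed because the connection is tunneled end to end and your local keys are used directly:</p>
<pre class="brush:plain;">
Host docker1
    Hostname 172.17.0.1
    User root
    ProxyJump host-machine
</pre>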
Assuming your key is already on the host machine and the Docker containers, to connect to Docker container 1 you just need to issue:
<pre class="brush:plain;">
$ ssh docker1
</pre>
<h4>References</h4>
<ul>
<li><a href="http://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/">http://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/</a></li>
<li><a href="http://sshmenu.sourceforge.net/articles/transparent-mulithop.html">http://sshmenu.sourceforge.net/articles/transparent-mulithop.html</a></li>
<li><a href="http://backdrift.org/transparent-proxy-with-ssh">http://backdrift.org/transparent-proxy-with-ssh</a></li>
</ul>
Unknownnoreply@blogger.com0