Javier López2024-03-01T16:54:58+00:00http://javier.io/shrink root volumes in AWS2024-03-01T00:00:00+00:00http://javier.io/blog/en/2024/03/01/shrink-root-volumes-aws<h2 id="shrink-root-volumes-in-aws">shrink root volumes in AWS</h2>
<h6 id="01-mar-2024">01 Mar 2024</h6>
<h2 id="volumes">Volumes</h2>
<ul>
<li>/dev/<strong>nvme1n1</strong> → Old volume → <strong>/old</strong></li>
<li>/dev/<strong>nvme2n1</strong> → New volume → <strong>/new</strong></li>
<li>/dev/<strong>nvme3n1</strong> → Backup Old volume (only applicable for XFS root volumes) → <strong>/old-backup</strong></li>
</ul>
<h2 id="general-instructions">General instructions</h2>
<ol>
<li>Create a snapshot of the root volume</li>
<li>Stop the target instance</li>
<li>Create a new volume of the desired (smaller) size; make sure its IOPS and throughput match the old volume</li>
<li>Create a tmp ec2 instance, this is what will be used to copy data between the old and new volumes</li>
<li>Detach the root volume from the old instance and attach it to the tmp one</li>
<li>Attach the new volume to the tmp instance and ssh into it</li>
<li><code class="language-plaintext highlighter-rouge">mkdir -p /old /new</code></li>
<li><code class="language-plaintext highlighter-rouge">dd bs=16M if=/dev/nvme1n1 of=/dev/nvme2n1 count=100</code> #copy bootloader from old to the new volume</li>
<li><code class="language-plaintext highlighter-rouge">fdisk /dev/nvme2n1</code> #format new volume
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>press 'p' and take note of the start sector, eg: 2048
delete the current partition with 'd'
create a new partition with 'n' and use the same start sector
press 'a' to make the partition bootable
press 'w' to save changes
</code></pre></div> </div>
</li>
<li><code class="language-plaintext highlighter-rouge">fdisk -l</code> #verify that the old and new volumes look similar</li>
</ol>
<p><strong>########## ext2/3/4 ##########</strong></p>
<ol>
<li><code class="language-plaintext highlighter-rouge">e2fsck -f /dev/nvme1n1p1</code> #check for errors in old volume</li>
<li><code class="language-plaintext highlighter-rouge">resize2fs -M -p /dev/nvme1n1p1</code> #move the data to the beginning of the partition</li>
</ol>
<p>In the previous command’s output, the last line tells you the number of blocks. Each block is 4K, but when we clone the partition we will do it in 16 MB blocks. So, to compute the number of 16 MB blocks, multiply the number in the last line by 4 / (16 * 1024).</p>
<p>Round this number UP (not down) to the nearest integer. Example: 1252939 (number in last line) * 4 / (16 * 1024) = 305.893310546875 … But round this UP to 306 or even 310 (it doesn’t matter as long as you don’t go below).</p>
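<p>The arithmetic above is easy to get wrong by hand; here is the same computation as a shell snippet (16 MB = 4096 blocks of 4K, so multiplying by 4 / (16 * 1024) is the same as dividing by 4096, and adding 4095 before integer division rounds up):</p>

```shell
# Number from resize2fs's last output line (the post's example value)
BLOCKS=1252939
# ceil(BLOCKS / 4096): add 4095 before integer division to round up
COUNT=$(( (BLOCKS + 4095) / 4096 ))
echo "$COUNT"   # → 306
```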
<p><code class="language-plaintext highlighter-rouge">dd bs=16M if=/dev/nvme1n1p1 of=/dev/nvme2n1p1 count=310</code> #copy data from old to new volume</p>
<ol>
<li><code class="language-plaintext highlighter-rouge">resize2fs -p /dev/nvme2n1p1</code> #new volume, expand fs</li>
<li><code class="language-plaintext highlighter-rouge">e2fsck -f /dev/nvme2n1p1</code> #fix possible errors in the new volume</li>
</ol>
<p><strong>########## xfs ##########</strong></p>
<ol>
<li>Create an additional volume the same size as the old volume and attach it to the tmp instance. It will be used to host a backup of the original fs; this is a workaround for XFS’s lack of shrinking support</li>
<li><code class="language-plaintext highlighter-rouge">mkfs.xfs /dev/nvme2n1p1; mkfs.xfs /dev/nvme3n1</code> #format the new partition and the backup volume (mkfs.xfs takes one device per run)</li>
<li><code class="language-plaintext highlighter-rouge">mount /dev/nvme1n1p1 /old; mount /dev/nvme2n1p1 /new</code></li>
<li><code class="language-plaintext highlighter-rouge">mkdir /old-backup; mount /dev/nvme3n1 /old-backup</code></li>
<li><code class="language-plaintext highlighter-rouge">xfsdump -L data -f /old-backup/old.xfsdump /old</code></li>
<li><code class="language-plaintext highlighter-rouge">xfsrestore -f /old-backup/old.xfsdump /new/</code></li>
<li><code class="language-plaintext highlighter-rouge">blkid /dev/nvme1n1p1</code> #get old uuid</li>
<li><code class="language-plaintext highlighter-rouge">xfs_admin -U &lt;UUID from step above&gt; /dev/nvme2n1p1</code> #apply old uuid to new volume</li>
<li><code class="language-plaintext highlighter-rouge">xfs_admin -L / /dev/nvme2n1p1</code></li>
</ol>
<h2 id="final-step">Final step</h2>
<p>Detach the new volume, attach it to the target instance as <strong>/dev/sda1</strong>, and start the instance</p>
<p>References:</p>
<ul>
<li><a href="https://medium.com/@ztobscieng/shrink-an-amazon-aws-ebs-root-volume-2020-update-8db834265c3e">https://medium.com/@ztobscieng/shrink-an-amazon-aws-ebs-root-volume-2020-update-8db834265c3e</a> <strong>ext2/3/4</strong></li>
<li><a href="https://medium.com/@benedikt.langens/how-to-shrink-an-ebs-root-volume-xfs-on-amazon-linux-2-2023-a7705c16e839">https://medium.com/@benedikt.langens/how-to-shrink-an-ebs-root-volume-xfs-on-amazon-linux-2-2023-a7705c16e839</a> <strong>xfs</strong></li>
</ul>
transfer files to ec2 instances via SSM and netcat2024-02-06T00:00:00+00:00http://javier.io/blog/en/2024/02/06/aws-cli-transfer-files-to-ec2-via-ssm<h2 id="transfer-files-to-ec2-instances-via-ssm-and-netcat">transfer files to ec2 instances via SSM and netcat</h2>
<h6 id="06-feb-2024">06 Feb 2024</h6>
<h2 id="step-1-run-netcat-on-the-target-ec2-machine-via-ssm">Step 1. Run netcat on the target EC2 machine via SSM</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws ssm start-session --target $INSTANCE_ID
ssm $ nc -l -p 9999 > $FILE_NAME
</code></pre></div></div>
<h2 id="step-2-in-another-shell-open-a-port-forwarding-session">Step 2. In another shell open a port-forwarding session:</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws ssm start-session --target $INSTANCE_ID --document-name AWS-StartPortForwardingSession --parameters '{"portNumber":["9999"],"localPortNumber":["9999"]}'
</code></pre></div></div>
<h2 id="step-3-in-a-3rd-shell-transfer-the-file-via-netcat">Step 3. In a 3rd shell transfer the file via netcat:</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ nc -w 3 127.0.0.1 9999 < $FILE_NAME
</code></pre></div></div>
<p>Profit!</p>
<p>References:</p>
<ul>
<li><a href="https://gist.github.com/lukeplausin/4b412d83fb1246b0bed6507b5083b3a7">https://gist.github.com/lukeplausin/4b412d83fb1246b0bed6507b5083b3a7</a></li>
</ul>
aws-cli ssm by ip2024-01-11T00:00:00+00:00http://javier.io/blog/en/2024/01/11/aws-cli-ssm-by-ip<h2 id="aws-cli-ssm-by-ip">aws-cli ssm by ip</h2>
<h6 id="11-jan-2024">11 Jan 2024</h6>
<h2 id="configure-ssm-in-aws">Configure SSM in AWS</h2>
<p>Before you start, make sure that your Amazon EC2 instances are configured correctly. This involves creating an IAM role with the <code class="language-plaintext highlighter-rouge">AmazonSSMManagedInstanceCore</code> managed policy for Systems Manager and attaching it to your instances. You also need to ensure that the Systems Manager Agent (SSM Agent) is installed on your nodes.</p>
<p>The SSM Agent is preinstalled by default on Amazon Linux base AMIs dated 2017.09 and later and on Amazon Linux 2, Windows Server 2008-2012 R2 AMIs, and others. If your distribution is not included use <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html">userdata</a> to install it on first boot:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
#for 32bit use: https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_32bit/session-manager-plugin.deb
#for arm64 use: https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb
dpkg -i session-manager-plugin.deb
</code></pre></div></div>
<h2 id="private-vpc">Private VPC</h2>
<p>If your EC2 instance is in a private network (without an Internet Gateway) you will need to set up 3 VPC Endpoints for AWS Systems Manager.</p>
<p>Go to <strong>VPC</strong> > <strong>Endpoints</strong> > <strong>Create Endpoint</strong> > <strong>Service category</strong> > <strong>AWS services</strong> > <strong>Service Name</strong></p>
<ul>
<li>com.amazonaws.us-west-2.ssm (or the region where your vpc is hosted)</li>
<li>com.amazonaws.us-west-2.ec2messages</li>
<li>com.amazonaws.us-west-2.ssmmessages</li>
</ul>
<p>In the <strong>VPC</strong> section, select the VPC of your EC2 instances. In the <strong>Security Group</strong> section, select a security group that allows HTTPS traffic (port 443) from and to your EC2 instances. In <strong>Policy</strong>, select <strong>Full Access</strong> if you want all instances in your VPC to be able to use SSM.</p>
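<p>The same console flow can be scripted with aws-cli; a sketch, where the VPC, subnet, and security group IDs are placeholders you pass in:</p>

```shell
# CLI sketch of the console steps above; all IDs are arguments you supply.
create_ssm_endpoints() {
  vpc="$1"; subnet="$2"; sg="$3"; region="${4:-us-west-2}"
  for svc in ssm ec2messages ssmmessages; do
    aws ec2 create-vpc-endpoint \
      --vpc-endpoint-type Interface \
      --vpc-id "$vpc" \
      --service-name "com.amazonaws.${region}.${svc}" \
      --subnet-ids "$subnet" \
      --security-group-ids "$sg" \
      --region "$region"
  done
}
```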
<h2 id="configure-aws-cli-using-sso">Configure aws-cli using sso</h2>
<p>Install <a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">aws-cli</a> and the <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-windows.html">windows plugin</a> if required.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws configure sso
SSO session name (Recommended): org-name
SSO start URL [None]: https://org-name.awsapps.com/start#/SSO
region [None]: us-west-2
SSO registration scopes [sso:account:access]:
There are 10 AWS accounts available to you. #select one option
Using the account ID 523868776147
There are 4 roles available to you.
Using the role name "PowerUserAccess"
CLI default client Region [None]: us-west-2
CLI profile name [PowerUserAccess-523868776147]: DevOps-PowerUser
To use this profile, specify the profile name using --profile, as shown:
aws s3 ls --profile DevOps-PowerUser
</code></pre></div></div>
<h2 id="connect-by-instance-id">Connect by instance-id</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws sso login --sso-session org-name; aws ssm start-session --target &lt;instance-id&gt; --profile DevOps-PowerUser
</code></pre></div></div>
<h2 id="connect-by-ip">Connect by ip</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws sso login --sso-session org-name; aws ssm start-session --target "$(aws ec2 describe-instances --filter Name=private-ip-address,Values=&lt;private-ip&gt; --query 'Reservations[].Instances[].InstanceId' --output text --region us-west-2 --profile DevOps-PowerUser)" --profile DevOps-PowerUser --region us-west-2
</code></pre></div></div>
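<p>That one-liner is easy to mistype; a small wrapper function keeps it reusable (a sketch — the profile and region names below are the ones used in this post, adjust them to your environment):</p>

```shell
# Usage: ssm_ip PRIVATE_IP — resolves the instance-id, then opens a session.
ssm_ip() {
  ip="$1"; profile="DevOps-PowerUser"; region="us-west-2"
  id="$(aws ec2 describe-instances \
    --filter "Name=private-ip-address,Values=${ip}" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text --region "$region" --profile "$profile")"
  aws ssm start-session --target "$id" --profile "$profile" --region "$region"
}
```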
<p>Profit!</p>
windows installers non interactive2024-01-05T00:00:00+00:00http://javier.io/blog/en/2024/01/05/windows-installers-non-interactive<h2 id="windows-installers-non-interactive">windows installers non interactive</h2>
<h6 id="05-jan-2024">05 Jan 2024</h6>
<h2 id="iis-85--windows-server-2012">IIS 8.5 / Windows Server 2012</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> import-module servermanager
pshell> add-windowsfeature Web-Server, Web-WebServer, Web-Security, Web-Filtering, Web-Cert-Auth, Web-IP-Security, Web-Url-Auth, Web-Windows-Auth, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-CertProvider, Web-Common-Http, Web-Http-Errors, Web-Dir-Browsing, Web-Static-Content, Web-Default-Doc, Web-Http-Redirect, Web-DAV-Publishing, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Health, Web-Http-Logging, Web-ODBC-Logging, Web-Log-Libraries, Web-Custom-Logging, Web-Request-Monitor, Web-Http-Tracing, Web-App-Dev, Web-Net-Ext45, Web-ASP, Web-Asp-Net45, Web-CGI, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-WebSockets, Web-AppInit, Web-Includes, Web-Ftp-Server, Web-Ftp-Service, Web-Ftp-Ext, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, Web-WMI, Web-Lgcy-Mgmt-Console, Web-Lgcy-Scripting, Web-Scripting-Tools, Web-Mgmt-Service -IncludeManagementTools
</code></pre></div></div>
<h2 id="iis-100--windows-server-2016">IIS 10.0 / Windows Server 2016</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> Install-WindowsFeature -name Web-Server -IncludeManagementTools
</code></pre></div></div>
<p>Confirm version installed:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> get-itemproperty HKLM:\SOFTWARE\Microsoft\InetStp\ | select setupstring,versionstring #show iis version
</code></pre></div></div>
<h2 id="iss-urlrewrite-module">IIS urlrewrite module</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> choco install -y urlrewrite
</code></pre></div></div>
<h2 id="chocolatey">Chocolatey</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
pshell> choco install wget -y
</code></pre></div></div>
<h2 id="mongodb">MongoDB</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> mkdir C:\bin
pshell> wget.exe https://downloads.mongodb.org/win32/mongodb-win32-x86_64-2008plus-ssl-3.4.24-signed.msi
pshell> msiexec.exe /quiet /i mongodb-win32-x86_64-2008plus-ssl-3.4.24-signed.msi INSTALLLOCATION="C:\bin\mongodb-win32-x86_64-2008plus-ssl-3.4.24\" ADDLOCAL="all"
pshell> While(Get-Process msiexec -ea si|?{$_.SI -ne 0}){} #wait until msiexec completes
pshell> mkdir D:\mongodb
pshell> C:\bin\mongodb-win32-x86_64-2008plus-ssl-3.4.24\bin\mongod --dbpath=D:\mongodb --logpath=D:\mongodb\log.txt --install --serviceName MongoDB
pshell> net start MongoDB
</code></pre></div></div>
<h2 id="nodejs">NodeJS</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> wget.exe --no-check-certificate https://nodejs.org/dist/v8.5.0/node-v8.5.0-x64.msi
pshell> msiexec.exe /quiet /i node-v8.5.0-x64.msi INSTALLDIR="C:\bin\nodejs-v8.5.0\" ADDLOCAL="all"
pshell> While(Get-Process msiexec -ea si|?{$_.SI -ne 0}){} #wait until msiexec completes
</code></pre></div></div>
<h2 id="iisnode">IISNode</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> wget.exe https://github.com/Azure/iisnode/releases/download/v0.2.26/iisnode-full-v0.2.26-x64.msi
pshell> #installs to C:\Program Files\iisnode
pshell> msiexec.exe /quiet /i iisnode-full-v0.2.26-x64.msi #does not allow a custom TARGETDIR="C:\bin\iisnode-full-v0.2.26\"
pshell> While(Get-Process msiexec -ea si|?{$_.SI -ne 0}){} #wait until msiexec completes
</code></pre></div></div>
<h2 id="python">Python</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> wget.exe --no-check-certificate https://www.python.org/ftp/python/3.6.5/python-3.6.5-amd64.exe
pshell> .\python-3.6.5-amd64.exe /quiet InstallAllUsers=0 TargetDir="C:\bin\python-3.6.5\"
pshell> While(Get-Process msiexec -ea si|?{$_.SI -ne 0}){} #wait until msiexec completes
</code></pre></div></div>
<h2 id="robo3t">Robo3t</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pshell> wget.exe https://github.com/Studio3T/robomongo/releases/download/v1.1.1/robo3t-1.1.1-windows-x86_64-c93c6b0.exe
pshell> .\robo3t-1.1.1-windows-x86_64-c93c6b0.exe /S /D=C:\bin\robo3t-1.1.1\
pshell> While(Get-Process robo3t -ea si|?{$_.SI -ne 0}){} #wait until the robo3t installer completes
</code></pre></div></div>
<p>Profit!</p>
launch and subscribe RHEL 8/9 instances in AWS2023-10-15T00:00:00+00:00http://javier.io/blog/en/2023/10/15/launch-rhel-instances-in-aws<h2 id="launch-and-suscribe-rhel-89-instances-in-aws">launch and subscribe RHEL 8/9 instances in AWS</h2>
<h6 id="15-oct-2023">15 Oct 2023</h6>
<h2 id="simple-no-redhat-cloud-integration">Simple, no RedHat Cloud integration</h2>
<p>Go to EC2 and launch either RHEL8 or RHEL9 instances:</p>
<p><strong>RHEL8</strong>: <a href="https://aws.amazon.com/marketplace/pp/prodview-kv5mi3ksb2mma">AWS Marketplace: Red Hat Enterprise Linux 8</a></p>
<p>Ami Id: <strong>ami-0b324207d4bcaec61</strong></p>
<p><strong>RHEL9</strong>: <a href="https://aws.amazon.com/marketplace/pp/prodview-b5psjqk4f5f3k">AWS Marketplace: Red Hat Enterprise Linux 9</a></p>
<p>Ami Id: <strong>ami-026ebd4cfe2c043b2</strong></p>
<p>Ensure the AWS instance includes a RHEL subscription</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl http://169.254.169.254/latest/dynamic/instance-identity/document 2>/dev/null | grep billingProducts
"billingProducts" : [ "bp-6fa54006" ] #ID will change depending on the RHEL version and must be != NULL
</code></pre></div></div>
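<p>Newer AMIs often launch with IMDSv2 enforced, in which case the plain curl above returns a 401; a token-based variant of the same check (a sketch) looks like this:</p>

```shell
# IMDSv2: fetch a short-lived token, then query the identity document with it.
imds_billing_check() {
  token=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
  curl -s -H "X-aws-ec2-metadata-token: ${token}" \
    http://169.254.169.254/latest/dynamic/instance-identity/document \
    | grep billingProducts
}
```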
<p>If the above check succeeds, disable the subscription-manager yum plugin:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo vi /etc/yum/pluginconf.d/subscription-manager.conf
enabled=0
</code></pre></div></div>
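<p>The same edit can be done non-interactively with sed; rehearsed below on a temp copy (on the instance, point it at <strong>/etc/yum/pluginconf.d/subscription-manager.conf</strong>):</p>

```shell
# Create a stand-in config, then flip enabled=1 to enabled=0 in place.
conf=/tmp/subscription-manager.conf
printf '[main]\nenabled=1\n' > "$conf"
sed -i 's/^enabled=.*/enabled=0/' "$conf"
grep '^enabled=' "$conf"   # → enabled=0
```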
<h3 id="enable-additional-repositories">Enable additional repositories</h3>
<p><strong>RHEL8</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo yum repolist all
$ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-8-rhui-rpms rhel-8-supplementary-rhui-rpms
</code></pre></div></div>
<p><strong>RHEL9</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo yum repolist all
$ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-9-rhui-rpms rhel-9-supplementary-rhui-rpms
</code></pre></div></div>
<p>Profit!</p>
<h2 id="with-redhat-cloud-and-insights-integration">With Redhat Cloud and Insights integration</h2>
<h3 id="redhat">RedHat</h3>
<ul>
<li><a href="https://www.redhat.com/wapps/ugc/register.html?_flowId=register-flow&_flowExecutionKey=e1s1">Create a RHEL account</a></li>
<li><a href="https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/try-it?intcmp=701f20000012m1qAAA">Request a RHEL trial</a>, this step could be removed in the future; however, for new accounts as of Oct 12th 2023, it’s a requirement for registering RHEL aws instances.</li>
<li>Go to Red Hat and enable Simple Content Access (should be on by default for new accounts)
<img src="https://github.com/javier-lopez/javier.io/assets/75626/abc8f9bc-9fce-496e-9cec-ca5d865cf943" alt="rhel-simple-content-access" /></li>
</ul>
<h3 id="aws">AWS</h3>
<p>Go to EC2 and launch either RHEL8 or RHEL9 instances:</p>
<p><strong>RHEL8</strong>: <a href="https://aws.amazon.com/marketplace/pp/prodview-kv5mi3ksb2mma">AWS Marketplace: Red Hat Enterprise Linux 8</a></p>
<p>Ami Id: <strong>ami-0b324207d4bcaec61</strong></p>
<p><strong>RHEL9</strong>: <a href="https://aws.amazon.com/marketplace/pp/prodview-b5psjqk4f5f3k">AWS Marketplace: Red Hat Enterprise Linux 9</a></p>
<p>Ami Id: <strong>ami-026ebd4cfe2c043b2</strong></p>
<p>Once set up, log in and subscribe using the RedHat credentials</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo subscription-manager register --username &lt;username&gt;
$ sudo insights-client
</code></pre></div></div>
<h3 id="enable-additional-repositories-1">Enable additional repositories</h3>
<p><strong>RHEL8</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo yum repolist all
$ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-8-rhui-rpms rhel-8-supplementary-rhui-rpms
</code></pre></div></div>
<p><strong>RHEL9</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo yum repolist all
$ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-9-rhui-rpms rhel-9-supplementary-rhui-rpms
</code></pre></div></div>
<p>Profit!</p>
<p><strong>References</strong></p>
<ul>
<li><a href="https://access.redhat.com/articles/6538061">How to register a Red Hat Enterprise Linux system running on AWS</a></li>
<li><a href="https://repost.aws/knowledge-center/ec2-yum-rhel-errors">Troubleshoot errors that are thrown when you use yum on an EC2 instance with RHEL</a></li>
</ul>
using colima to run docker on a mac2023-10-10T00:00:00+00:00http://javier.io/blog/en/2023/10/10/docker-osx<h2 id="using-colima-to-run-docker-on-a-mac">using colima to run docker on a mac</h2>
<h6 id="10-oct-2023">10 Oct 2023</h6>
<h2 id="install-docker--docker-compose">Install docker / docker-compose</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ brew install docker
$ brew install docker-compose
$ docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
</code></pre></div></div>
<h2 id="install-colima">Install colima</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ brew install colima
$ colima start
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
70f5ac315c5a: Pull complete
...
</code></pre></div></div>
<ul>
<li><a href="https://github.com/abiosoft/colima">https://github.com/abiosoft/colima</a></li>
</ul>
<p>Happy hacking! 😊</p>
leetcode2023-01-22T00:00:00+00:00http://javier.io/blog/en/2023/01/22/leetcode<h2 id="leetcode">leetcode</h2>
<h6 id="22-jan-2023">22 Jan 2023</h6>
<h3 id="easy">Easy</h3>
<ul>
<li><a href="https://leetcode.com/problems/two-sum/solutions/3087960/hash-table-cache-python/">1. Two Sum</a></li>
<li><a href="https://leetcode.com/problems/roman-to-integer/solutions/2960085/solved-in-a-similar-way-as-the-text-describes-the-problem-python/">13. Roman to Integer</a></li>
<li><a href="https://leetcode.com/problems/longest-common-prefix/solutions/2965450/user-friendly-solution-python/">14. Longest Common Prefix</a></li>
<li><a href="https://leetcode.com/problems/valid-parentheses/solutions/2967816/dictionary-solution-easy-to-understand-python/">20. Valid Parentheses</a></li>
<li><a href="https://leetcode.com/problems/merge-two-sorted-lists/solutions/2972798/dummy-variable-explained-python/">21. Merge Two Sorted Lists</a></li>
<li><a href="https://leetcode.com/problems/remove-duplicates-from-sorted-array/solutions/2973934/two-pointers-solution-slow-fast-python/">26. Remove Duplicates from Sorted Array</a></li>
<li><a href="https://leetcode.com/problems/plus-one/solutions/2976291/inverse-array-verification-python/">66. Plus One</a></li>
<li><a href="https://leetcode.com/problems/sqrtx/solutions/2981409/binary-search-finding-first-true-statement-in-false-true-list-python/">69. Sqrt(x)</a></li>
<li><a href="https://leetcode.com/problems/climbing-stairs/solutions/2996123/bottom-up-dp-python/">70. Climbing Stairs</a></li>
<li><a href="https://leetcode.com/problems/merge-sorted-array/solutions/2999149/two-pointers-python/">88. Merge Sorted Array</a></li>
<li><a href="https://leetcode.com/problems/binary-tree-inorder-traversal/solutions/3005027/all-dfs-traversals-preorder-inorder-postorder-python/">94. Binary Tree Inorder Traversal</a></li>
<li><a href="https://leetcode.com/problems/symmetric-tree/solutions/3012562/recursive-iterative-solution-python/">101. Symmetric Tree</a></li>
<li><a href="https://leetcode.com/problems/maximum-depth-of-binary-tree/solutions/3021923/max-depth-based-in-all-traversals-preorder-inorder-postorder-python/">104. Maximum Depth of Binary Tree</a></li>
<li><a href="https://leetcode.com/problems/convert-sorted-array-to-binary-search-tree/solutions/3032703/recursion-and-iterative-python/">108. Convert Sorted Array to Binary Search Tree</a></li>
<li><a href="https://leetcode.com/problems/pascals-triangle/solutions/3033373/initialize-pascal-array-with-1s-and-fill-efficiently-python/">118. Pascal’s Triangle</a></li>
<li><a href="https://leetcode.com/problems/best-time-to-buy-and-sell-stock/solutions/3047530/two-pointers-python/">121. Best Time to Buy and Sell Stock</a></li>
<li><a href="https://leetcode.com/problems/valid-palindrome/solutions/3056331/two-pointers-python/">125. Valid Palindrome</a></li>
<li><a href="https://leetcode.com/problems/single-number/solutions/3056816/dictionary-and-xor-python/">136. Single Number</a></li>
<li><a href="https://leetcode.com/problems/linked-list-cycle/solutions/3067337/slow-fast-pointers-python/">141. Linked List Cycle</a></li>
<li><a href="https://leetcode.com/problems/intersection-of-two-linked-lists/solutions/3069994/two-pointers-python/">160. Intersection of Two Linked Lists</a></li>
<li><a href="https://leetcode.com/problems/majority-element/solutions/3093065/boyer-moore-voting-algorithm-python/">169. Majority Element</a></li>
<li><a href="https://leetcode.com/problems/excel-sheet-column-number/solutions/3093166/base-26-conversion-python/">171. Excel Sheet Column Number</a></li>
<li><a href="https://leetcode.com/problems/reverse-bits/solutions/3118596/bit-manipulation-shifting-to-left-right-python/">190. Reverse Bits</a></li>
</ul>
<h3 id="medium">Medium</h3>
<h3 id="hard">Hard</h3>
<p>Happy interviewing!</p>
install tmux on windows 102022-11-15T00:00:00+00:00http://javier.io/blog/en/2022/11/15/install-tmux-on-windows-10<h2 id="install-tmux-on-windows-10">install tmux on windows 10</h2>
<h6 id="15-nov-2022">15 Nov 2022</h6>
<p>Download and install <a href="https://www.msys2.org/#installation">https://www.msys2.org/#installation</a></p>
<p>If behind a firewall, compress it as gz or bz2, upload it to a custom server, and download it from there, eg:</p>
<ul>
<li><a href="http://f.javier.io/public/bin/msys2-x86_64-20221028.exe.gz">http://f.javier.io/public/bin/msys2-x86_64-20221028.exe.gz</a></li>
<li><a href="http://f.javier.io/public/bin/msys2-x86_64-20221028.exe.bz2">http://f.javier.io/public/bin/msys2-x86_64-20221028.exe.bz2</a></li>
</ul>
<h2 id="install-tmux">Install tmux</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pacman -S tmux
</code></pre></div></div>
<p>If behind a firewall (SSL issue), modify <strong>/etc/pacman.conf</strong>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>>> XferCommand = /usr/bin/curl --insecure -L -C - -f -o %o %u
</code></pre></div></div>
<p>And Git:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git config --global http.sslVerify false
</code></pre></div></div>
<p>Copy tmux and dependencies to Git for Windows path:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cp C:\msys64\usr\bin\tmux C:\msys64\usr\bin\msys-event* "C:\Program Files\Git\usr\bin"
</code></pre></div></div>
<p>Restart Git for Windows</p>
<h2 id="extra">Extra</h2>
<p>If you want to keep msys2 you may want to change the <strong>$HOME</strong> directory so it points to the same place as Git for Windows; modify <strong>/etc/nsswitch.conf</strong>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>db_home: windows
</code></pre></div></div>
<p>Happy hacking!</p>
install 7zip with choco without admin permissions2022-11-15T00:00:00+00:00http://javier.io/blog/en/2022/11/15/install-7zip-choco-non-admin<h2 id="install-7zip-with-choco-without-admin-permissions">install 7zip with choco without admin permissions</h2>
<h6 id="15-nov-2022">15 Nov 2022</h6>
<p>Install choco, from a powershell:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ notepad choco-non-admin.ps1
</code></pre></div></div>
<p>Then paste the following instructions:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Set directory for installation - Chocolatey does not lock
# down the directory if not the default
$InstallDir='C:\ProgramData\chocoportable'
$env:ChocolateyInstall="$InstallDir"
# If your PowerShell Execution policy is restrictive, you may
# not be able to get around that. Try setting your session to
# Bypass.
Set-ExecutionPolicy Bypass -Scope Process -Force;
# All install options - offline, proxy, etc at
# https://chocolatey.org/install
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
</code></pre></div></div>
<p>Execute the resulting script and install the portable 7zip version:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./choco-non-admin.ps1
$ choco install 7zip.portable
</code></pre></div></div>
<p>Open <strong>C:\ProgramData\chocoportable\lib\7zip.portable\tools\7zFM</strong> to compress / uncompress stuff.</p>
<p>Happy hacking!</p>
restart firefox2021-09-27T00:00:00+00:00http://javier.io/blog/en/2021/09/27/restart-linux<h2 id="restart-firefox">restart firefox</h2>
<h6 id="27-sep-2021">27 Sep 2021</h6>
<h3 id="issue">Issue</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Error: Firefox behaves weird, eg: session issues
</code></pre></div></div>
<h3 id="fix">Fix</h3>
<pre class="sh_sh">
about:restartrequired
</pre>
<p>That’s it, happy browsing, 😊</p>
citrix on linux2021-07-12T00:00:00+00:00http://javier.io/blog/en/2021/07/12/citrix-on-linux<h2 id="citrix-on-linux">citrix on linux</h2>
<h6 id="12-jul-2021">12 Jul 2021</h6>
<h3 id="issue">Issue</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Error: "SSL Error 61: You have not chosen to trust 'Certificate Authority'..." on Receiver for Linux
</code></pre></div></div>
<h3 id="fix">Fix</h3>
<pre class="sh_sh">
$ sudo mv /opt/Citrix/ICAClient/keystore/cacerts /opt/Citrix/ICAClient/keystore/cacerts.bk
$ sudo ln -s /etc/ssl/certs/ /opt/Citrix/ICAClient/keystore/cacerts
</pre>
<p>That’s it, happy coworking, 😊</p>
transfer files with netcat2021-01-17T00:00:00+00:00http://javier.io/blog/en/2021/01/17/transfer-files-with-netcat<h2 id="transfer-files-with-netcat">transfer files with netcat</h2>
<h6 id="17-jan-2021">17 Jan 2021</h6>
<p>Here goes a quick note about how to transfer files between computers in a LAN
using <strong>netcat</strong>, which, thanks to its simplicity, is available on a wide range
of platforms.</p>
<h3 id="receiving-node-192168174">Receiving node (192.168.1.74)</h3>
<pre class="sh_sh">
$ mkdir backup/ && cd backup/
$ nc -l -p 7000 | pv | tar x  # pv is optional, shows a progress bar / ETA
</pre>
<h3 id="sending-node">Sending node</h3>
<pre class="sh_sh">
$ tar cf - * | nc 192.168.1.74 7000
</pre>
<p>That’s it, happy sharing, 😊</p>
host several sites in a single box with docker and traefik v2, https2020-12-03T00:00:00+00:00http://javier.io/blog/en/2020/12/03/host-several-sites-in-a-single-box-with-docker-and-traefik-https<h2 id="host-several-sites-in-a-single-box-with-docker-and-traefik-v2-https">host several sites in a single box with docker and traefik v2, https</h2>
<h6 id="03-dec-2020">03 Dec 2020</h6>
<p>Last time I wrote about how simple it is to <a href="http://javier.io/blog/en/2020/12/01/host-several-sites-in-a-single-box-with-docker-and-traefik-http.html">host several sites with docker +
traefik on a single node</a>;
in this article I’ll build on that with https and automatic ssl
certificate renewal.</p>
<p>Before you continue you’ll <strong>need</strong> to be familiar with the previous
post, since I’m building upon it. OK, ready? Let’s recapitulate:</p>
<h2 id="diagram-and-folder-structure">Diagram and Folder Structure</h2>
<p><strong><a href="/assets/img/traefik-docker-compose.png"><img src="/assets/img/traefik-docker-compose.png" alt="" /></a></strong></p>
<p>Traefik will receive all requests and forward them to different containers
depending on the domain/subdomain. In the process it’ll provide ssl termination
for our users and dockerized applications; the certificates will be
auto-renewed every 2-3 months without any manual step, cool!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>┬
├── multisite (traefik)
│ ├── docker-compose.yml
│ ├── docker-compose.ssl.yml => NEW FILE
│ ├── acme.json => NEW FILE
├── site1.com
│ ├── ...
│ ├── docker-compose.site1.yml
│ ├── docker-compose.site1.ssl.yml => NEW FILE
├── site2.com
├── ...
├── docker-compose.site2.yml
├── docker-compose.site2.ssl.yml => NEW FILE
</code></pre></div></div>
<p>As you noticed, new files were added; the idea is to keep the
flexibility to provision either an <strong>http only</strong> or an <strong>http + https</strong> site.</p>
<h2 id="pre-requisite-dns-configuration">pre-requisite, dns configuration</h2>
<p>When working with <strong>http only</strong> there is no need to move our code out of our local
environment; it’s easy to add some entries to <strong>/etc/hosts</strong> and call it a day.
This time, however, is different: we need to <strong>upload our files to a box with
a public ip address</strong> and verify that the dns routing works as expected.
That is, if we are going to host these sites at
185.199.109.153, <strong>we need to make sure site1.com / site2.com resolve to
185.199.109.153</strong>.</p>
<p>I won’t cover how to do that because it depends on your DNS registrar; for
reference I’m using <a href="https://www.racknerd.com/">RackNerd</a> as my Linux box and
<a href="https://www.dnspod.com/">DNSPod</a> as my DNS provider.</p>
<p>Why do we need to prepare our setup like this before starting? It has to do
with <a href="https://letsencrypt.org/">Let’s Encrypt</a>, the Certificate Authority
we’re going to depend on. This CA generates challenges to verify that we are
the <strong>owners of the referenced domain/subdomain</strong>; fortunately that happens
automatically, so we don’t need to do anything besides making sure that Let’s
Encrypt can reach our domains.</p>
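<p>Before moving on, it’s worth confirming the records have already propagated. A
quick sketch, run from the box itself (<strong>getent</strong> queries the system resolver;
<strong>dig +short site1.com</strong> is an alternative if you have bind-utils installed):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ for domain in site1.com site2.com; do
    getent hosts "$domain" || echo "WARNING: $domain does not resolve yet"
  done
</code></pre></div></div>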
<h2 id="multisite">multisite</h2>
<p>Remember that from here on all changes happen on a remote public machine.
I’ll start by creating a copy of <strong>docker-compose.yml</strong>, which will make it easier
to track the ssl changes:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cp docker-compose.yml docker-compose.ssl.yml
</code></pre></div></div>
<p><strong>multisite/docker-compose.ssl.yml.patch</strong>:</p>
<pre class="sh_diff">
--- docker-compose.ssl.yml 2020-12-03 10:02:48.186590271 -0600
+++ docker-compose.ssl.changes.yml 2020-12-03 10:03:30.940486004 -0600
@@ -9,11 +9,21 @@
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=traefik_global"
#- "--log.level=DEBUG"
+ - "--entrypoints.http.address=:80"
+ - "--entrypoints.https.address=:443"
+ - "--certificatesresolvers.myresolver.acme.email=your-personal@email.tld"
+ - "--certificatesresolvers.myresolver.acme.storage=/acme.json"
+ - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
ports:
- "80:80" #reverse proxy => input to all containerized services
+ - "443:443" #reverse ssl proxy
- "8080:8080" #traefik dashboard/api
volumes:
- /var/run/docker.sock:/var/run/docker.sock
+ # Run this command in the host machine before launching traefik:
+ # $ touch acme.json && chmod 600 acme.json
+ - ${PWD}/acme.json:/acme.json
networks:
- traefik
</pre>
<p>Our original file is now more verbose; nevertheless, every option is there for a
reason.</p>
<p>By default traefik only opens the <strong>http port (80)</strong>, so if we want to allow
both, <strong>http/https</strong>, we need to be more specific:</p>
<pre class="sh_diff">
+ - "--entrypoints.http.address=:80"
+ - "--entrypoints.https.address=:443"
</pre>
<p>We also need to select which <em>certificate resolver</em> we’re going to use, in
this case Let’s Encrypt; we specify that by filling in the acme fields.</p>
<p><strong>acme.email</strong> can be any personal/business email. <strong>acme.storage</strong> is where
our ssl certificates will be saved; it does <strong>need to exist but can be
empty</strong>, in which case traefik will populate it with valid certs.
<strong>acme.tlschallenge</strong> is the challenge type; there are <a href="https://doc.traefik.io/traefik/https/acme/#the-different-acme-challenges">other
types</a>,
but I think this is the easiest.</p>
<pre class="sh_diff">
+ - "--certificatesresolvers.myresolver.acme.email=your-personal@email.tld"
+ - "--certificatesresolvers.myresolver.acme.storage=/acme.json"
+ - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
ports:
- "80:80" #reverse proxy => input to all containerized services
+ - "443:443" #reverse ssl proxy
</pre>
<p>Finally, we’ll share the <strong>acme.json</strong> file between host and container to avoid
requesting new certificates each time we launch our traefik container.</p>
<pre class="sh_diff">
+ # Run this command in the host machine before launching traefik:
+ # $ touch acme.json && chmod 600 acme.json
+ - ${PWD}/acme.json:/acme.json
</pre>
<p>As the comments suggest, this file needs to be created with specific
permissions before running traefik.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ touch acme.json && chmod 600 acme.json
</code></pre></div></div>
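<p>To double check the permissions took effect (traefik complains if the acme
storage is readable by group/others):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ stat -c '%a %n' acme.json
600 acme.json
</code></pre></div></div>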
<p>Ok, that’s all on traefik side, let’s apply the patch:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ patch -p0 < docker-compose.ssl.yml.patch
patching file docker-compose.ssl.yml
</code></pre></div></div>
<h2 id="site1com">site1.com</h2>
<p>Let’s copy and analyze what makes a new site https compatible:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cp docker-compose.site1.yml docker-compose.site1.ssl.yml
</code></pre></div></div>
<p><strong>site1.com/docker-compose.site1.ssl.yml.patch</strong>:</p>
<pre class="sh_diff">
--- docker-compose.site1.ssl.yml 2020-12-03 10:02:48.186590271 -0600
+++ docker-compose.site1.ssl.changes.yml 2020-12-03 10:03:30.940486004 -0600
@@ -28,7 +28,15 @@
- frontend
labels:
- "traefik.enable=true"
- - "traefik.http.routers.site1_com.rule=Host(`site1.com`)"
+
+ - "traefik.http.routers.http_site1_com.rule=Host(`site1.com`)"
+ - "traefik.http.routers.http_site1_com.entrypoints=http"
+
+ - "traefik.http.routers.https_site1_com.rule=Host(`site1.com`)"
+ - "traefik.http.routers.https_site1_com.entrypoints=https"
+ - "traefik.http.routers.https_site1_com.tls=true"
+ - "traefik.http.routers.https_site1_com.tls.certresolver=myresolver"
+
- "traefik.http.services.site1_com.loadbalancer.server.port=80"
app:
</pre>
<p>I don’t know about you, but to me the syntax is confusing; fortunately it only
needs to be set up once and can then be reused for other domains/subdomains by
changing just a few words. Also, the ssl endpoint is transparent: our
application doesn’t need to be aware of it, which is great and IMO outweighs the
verbose configuration.</p>
<p>As you noticed, the <strong>site1_com</strong> rules were split in two, <strong>http_site1_com</strong>
and <strong>https_site1_com</strong>; this is because each router needs to define a Host and
an entrypoint (port), repetitive right? In the <strong>https</strong> router we enable
<strong>tls</strong> and point to our custom resolver <strong>myresolver</strong>, which, if we recall, uses
Let’s Encrypt. There is also one more detail:</p>
<pre class="sh_diff">
- "traefik.http.services.site1_com.loadbalancer.server.port=80"
</pre>
<p>The service element forwards traefik routes to our app’s port 80; since each
router already defines a domain and entrypoint, the service affects the domain as a
whole and is therefore kept as a single <strong>site1_com</strong>, @.@!</p>
<p>This configuration leaves out an important use case that becomes more common
every year: forcing users onto <strong>https</strong> over <strong>http</strong>. Since I personally do not agree
with such, IMO, abusive behavior, I skipped it on purpose; however, if you’re
interested you can use a
<a href="https://doc.traefik.io/traefik/middlewares/redirectscheme/">middleware</a> to
configure it.</p>
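<p>For reference, such a redirect could look roughly like the following labels on
the <strong>http</strong> router; this is a sketch based on traefik’s redirectscheme
middleware docs and is intentionally not part of this setup:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- "traefik.http.middlewares.force_https.redirectscheme.scheme=https"
- "traefik.http.routers.http_site1_com.middlewares=force_https"
</code></pre></div></div>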
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ patch -p0 < docker-compose.site1.ssl.yml.patch
patching file docker-compose.site1.ssl.yml
</code></pre></div></div>
<h2 id="site2com">site2.com</h2>
<p>The second site should be easier to review:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cp docker-compose.site2.yml docker-compose.site2.ssl.yml
</code></pre></div></div>
<p><strong>site2.com/docker-compose.site2.ssl.yml.patch</strong>:</p>
<pre class="sh_diff">
--- docker-compose.site2.ssl.yml 2020-12-03 10:02:48.186590271 -0600
+++ docker-compose.site2.ssl.changes.yml 2020-12-03 10:03:30.940486004 -0600
@@ -26,7 +26,15 @@
- frontend
labels:
- "traefik.enable=true"
- - "traefik.http.routers.site2_com.rule=Host(`site2.com`)"
+
+ - "traefik.http.routers.http_site2_com.rule=Host(`site2.com`)"
+ - "traefik.http.routers.http_site2_com.entrypoints=http"
+
+ - "traefik.http.routers.https_site2_com.rule=Host(`site2.com`)"
+ - "traefik.http.routers.https_site2_com.entrypoints=https"
+ - "traefik.http.routers.https_site2_com.tls=true"
+ - "traefik.http.routers.https_site2_com.tls.certresolver=myresolver"
+
- "traefik.http.services.site2_com.loadbalancer.server.port=80"
</pre>
<p>Everything is the same; the only difference is that <strong>site1</strong> was replaced with
<strong>site2</strong>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ patch -p0 < docker-compose.site2.ssl.yml.patch
patching file docker-compose.site2.ssl.yml
</code></pre></div></div>
<h2 id="docker-compose-up">docker-compose up</h2>
<p>If you’ve followed everything up to this point, <strong>congratulations!</strong>
Technology is great, but it also tends to get harder to grasp as more elements are
incorporated. Let’s end this tutorial once and for all so we can continue with our
lives:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd multisite/ && docker-compose -f docker-compose.ssl.yml up -d
$ cd ../site1.com/ && docker-compose -f docker-compose.site1.ssl.yml up -d
$ cd ../site2.com/ && docker-compose -f docker-compose.site2.ssl.yml up -d
</code></pre></div></div>
<p>That’s it! A fairly simple setup with ssl certs that is only limited by
the amount of <strong>RAM/CPU</strong> in your machine:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl https://site1.com
hello world from 7b7d6302-e162-3806-9595-17f854dd5b98
$ curl https://site2.com; echo
{"Greetings": "Hello World!"}
</code></pre></div></div>
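<p>To confirm the certificates really come from Let’s Encrypt (and to see when
they expire), openssl can inspect what the server presents:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo | openssl s_client -connect site1.com:443 -servername site1.com 2>/dev/null \
    | openssl x509 -noout -issuer -enddate
</code></pre></div></div>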
<p>Happy hacking!</p>
<ul>
<li><a href="http://javier.io/blog/en/2020/12/01/host-several-sites-in-a-single-box-with-docker-and-traefik-http.html">http://javier.io/blog/en/2020/12/01/host-several-sites-in-a-single-box-with-docker-and-traefik-http.html</a></li>
<li><a href="https://github.com/traefik/traefik/issues/5506#issuecomment-549100716">https://github.com/traefik/traefik/issues/5506#issuecomment-549100716</a></li>
</ul>
host several sites in a single box with docker and traefik v2, http2020-12-01T00:00:00+00:00http://javier.io/blog/en/2020/12/01/host-several-sites-in-a-single-box-with-docker-and-traefik-http<h2 id="host-several-sites-in-a-single-box-with-docker-and-traefik-v2-http">host several sites in a single box with docker and traefik v2, http</h2>
<h6 id="01-dec-2020">01 Dec 2020</h6>
<p><a href="https://www.docker.com/">Docker</a> is great and everything; however, one of the
things that still stresses me is how to deploy it to production. I don’t need
fancy stuff, nor do I want to spend my free time or money hosting small/personal
projects. I just want to be able to <strong>docker-compose up</strong> and forget. And if
several projects can share a single node while maintaining their own
domain/subdomain, even better.</p>
<p>Today, after completing another small service, I decided I’d had enough; I
reviewed several alternatives and finally found a sensible one:
<a href="https://traefik.io/">Traefik</a>, a simple yet powerful reverse proxy that is
compatible with Docker/Kubernetes/Blablabla and just works. So here goes
my own tutorial, for future me and other souls in pain.</p>
<h2 id="diagram-and-folder-structure">Diagram and Folder Structure</h2>
<p><strong><a href="/assets/img/traefik-docker-compose.png"><img src="/assets/img/traefik-docker-compose.png" alt="" /></a></strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>┬
├── multisite (traefik)
│ ├── docker-compose.yml
├── site1.com
│ ├── ...
│ ├── docker-compose.site1.yml
├── site2.com
├── ...
├── docker-compose.site2.yml
</code></pre></div></div>
<h2 id="multisite">multisite</h2>
<p><strong>multisite/docker-compose.yml</strong> contains the core of the setup:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>version: '3.4'
services:
traefik:
image: traefik:v2.3
command:
- "--api.insecure=true"
- "--providers.docker"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=traefik_global"
ports:
- "80:80" #all traffic will arrive through this port
- "8080:8080" #traefik dashboard/api
volumes: #this is how traefik reads docker events
- /var/run/docker.sock:/var/run/docker.sock
networks:
- traefik
networks:
traefik:
name: traefik_global
</code></pre></div></div>
<p>It’s amazing how simple it can get, let’s review some of its sections:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--api.insecure=true
</code></pre></div></div>
<p>This activates traefik’s dashboard/api (which should be disabled or protected in production
systems): <a href="http://localhost:8080">http://localhost:8080</a> and
<a href="http://localhost:8080/api/rawdata">http://localhost:8080/api/rawdata</a>.
Personally the latter was more useful to me; it helped me debug route
mismatches.</p>
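<p>The rawdata endpoint returns one big JSON blob; piping it through a
pretty-printer makes it much easier to scan, for example with python’s stdlib
<strong>json.tool</strong> (jq works too, if you have it):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl -s http://localhost:8080/api/rawdata | python3 -m json.tool | less
</code></pre></div></div>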
<p><strong><a href="/assets/img/traefik-dashboard.png"><img src="/assets/img/traefik-dashboard.png" alt="" /></a></strong></p>
<p>Traefik is able to autoconfigure its routing from Docker events/data. That is
great, but if you don’t want to end up with dozens of routes created by auxiliary
services, it’s better to only allow specific ones, maybe only each service’s
front-end?</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--providers.docker
--providers.docker.exposedbydefault=false
</code></pre></div></div>
<p>By default, <strong>docker-compose</strong> creates volumes/networks based on the project
folder name; that protects against service collisions. For our traefik case,
however, we need a global network that can be referenced from multiple places,
and that’s what the <strong>name:</strong> parameter does. The <strong>providers.docker.network</strong>
option instructs Traefik to route all its traffic through this
interface; if it weren’t defined here, every service would need to do it in its own
<strong>docker-compose.yml</strong> file.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--providers.docker.network=traefik_global
networks:
traefik:
name: traefik_global
</code></pre></div></div>
<h2 id="site1com">site1.com</h2>
<p>In order to make the scenario realistic I’m going to use small applications that,
although simple, contain enough complexity to mirror real world cases:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/nebulosa/docker-flask-hello-world-mongodb site1.com
$ cd site1.com/
$ git checkout 0de86a2
</code></pre></div></div>
<p><strong><a href="/assets/img/traefik-tier-3-app.png"><img src="/assets/img/traefik-tier-3-app.png" alt="" /></a></strong></p>
<p>The above image doesn’t consider docker, yet it helps describe how a common web
application works; once we take containers / subnets into account we arrive
at the following diagram:</p>
<p><strong><a href="/assets/img/traefik-tier-3-dockerized-app.png"><img src="/assets/img/traefik-tier-3-dockerized-app.png" alt="" /></a></strong></p>
<p>As you can see, the only container that can communicate with both <strong>frontend</strong>
and <strong>database</strong> is the <strong>app</strong>; this is just good practice. In our final
setup, an additional <strong>traefik_global</strong> network will be added; it will connect
every front-end web container to send/receive requests while the rest of each
service stack stays hidden in its own namespace. Simple, elegant and easy to scale.</p>
<p>I’ll create a copy of <strong>docker-compose-cherry.yml</strong> and apply a patch to
showcase how the traefik connection works:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cp docker-compose-cherry.yml docker-compose.site1.yml
</code></pre></div></div>
<p><strong>site1.com/docker-compose.site1.yml.patch</strong>:</p>
<pre class="sh_diff">
--- docker-compose.site1.yml 2020-12-01 10:02:48.186590271 -0600
+++ docker-compose.site1.changes.yml 2020-12-01 10:03:30.940486004 -0600
@@ -18,15 +18,18 @@
nginx:
image: nginx:1.13.10-alpine
- ports:
- - "5000:80"
volumes:
- ./nginx/default/:/etc/nginx/conf.d
- /etc/localtime:/etc/localtime:ro
depends_on:
- app
networks:
+ - traefik #add 1st so traefik performs better
- frontend
+ labels:
+ - "traefik.enable=true"
+ - "traefik.http.routers.site1_com.rule=Host(`site1.com`)"
+ - "traefik.http.services.site1_com.loadbalancer.server.port=80"
app:
build: .
@@ -52,3 +55,6 @@
driver: bridge #or overlay in swarm mode
backend:
driver: bridge #or overlay in swarm mode
+ traefik:
+ external:
+ name: traefik_global
</pre>
<p>Since all our traffic will pass through localhost:80/traefik there is no need
to expose/bind additional ports:</p>
<pre class="sh_diff">
nginx:
image: nginx:1.13.10-alpine
- ports:
- - "5000:80"
</pre>
<p>The front-end container, <strong>nginx</strong> on this case, is connected to the global
traefik network.</p>
<pre class="sh_diff">
networks:
+ - traefik #add 1st so traefik performs better
- frontend
+ traefik:
+ external:
+ name: traefik_global
</pre>
<p>Only the <strong>nginx</strong> container is announced to traefik (<strong>enable=true</strong>); it
will respond to the <strong>site1.com</strong> domain and is reachable on its
local port <strong>80</strong> (<strong>grep "listen" nginx/default/default.conf</strong>). An
important step is to verify that the <strong>routers/services id</strong> is unique, in
this case <strong>site1_com</strong>:</p>
<pre class="sh_diff">
+ labels:
+ - "traefik.enable=true"
+ - "traefik.http.routers.site1_com.rule=Host(`site1.com`)"
+ - "traefik.http.services.site1_com.loadbalancer.server.port=80"
</pre>
<p>That’s all for a basic setup; I’ll add https/automatic ssl renewal in a
future article. Let’s apply the patch:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ patch -p0 < docker-compose.site1.yml.patch
patching file docker-compose.site1.yml
</code></pre></div></div>
<h2 id="site2com">site2.com</h2>
<p>The second site is an API, really simple but also with its own
database and nginx containers.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/nebulosa/flask-api-rest site2.com
$ cd site2.com/
$ git checkout 9489597
</code></pre></div></div>
<p>I’ll also create a copy of <strong>docker-compose-cherry.yml</strong> and apply a patch similar
to the previous one:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cp docker-compose-cherry.yml docker-compose.site2.yml
</code></pre></div></div>
<p><strong>site2.com/docker-compose.site2.yml.patch</strong>:</p>
<pre class="sh_diff">
--- docker-compose.site2.yml 2020-12-01 10:02:48.186590271 -0600
+++ docker-compose.site2.changes.yml 2020-12-01 10:03:30.940486004 -0600
@@ -17,14 +17,17 @@
nginx:
image: nginx:1.13.10-alpine
- ports:
- - "5000:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- app
networks:
+ - traefik #add 1st so traefik performs better
- frontend
+ labels:
+ - "traefik.enable=true"
+ - "traefik.http.routers.site2_com.rule=Host(`site2.com`)"
+ - "traefik.http.services.site2_com.loadbalancer.server.port=80"
app:
build: .
@@ -46,3 +49,6 @@
driver: bridge #or overlay in swarm mode
backend:
driver: bridge #or overlay in swarm mode
+ traefik:
+ external:
+ name: traefik_global
</pre>
<p>As you’ll notice, all changes are the same except for:</p>
<pre class="sh_diff">
+ labels:
+ - "traefik.enable=true"
+ - "traefik.http.routers.site2_com.rule=Host(`site2.com`)"
+ - "traefik.http.services.site2_com.loadbalancer.server.port=80"
</pre>
<p>This time, the routers/services id is <strong>site2_com</strong>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ patch -p0 < docker-compose.site2.yml.patch
patching file docker-compose.site2.yml
</code></pre></div></div>
<h2 id="docker-compose-up">docker-compose up</h2>
<p>There is one step I’m going to do before launching everything: edit
<strong>/etc/hosts</strong>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
127.0.0.1 site1.com
127.0.0.1 site2.com
</code></pre></div></div>
<p>That will help me test the sites locally. OK, let’s end this tutorial:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd multisite/ && docker-compose up -d
$ cd ../site1.com/ && docker-compose -f docker-compose.site1.yml up -d
$ cd ../site2.com/ && docker-compose -f docker-compose.site2.yml up -d
</code></pre></div></div>
<p>That’s it! A simple setup that is only limited by the amount of
<strong>RAM/CPU</strong> in your machine:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl site1.com
hello world from 7b7d6302-e162-3806-9595-17f854dd5b98
$ curl site2.com; echo
{"Greetings": "Hello World!"}
</code></pre></div></div>
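<p>By the way, the <strong>/etc/hosts</strong> edit can be avoided entirely: curl’s
<strong>--resolve</strong> flag maps a hostname to an address for a single request, which is
handy for quick checks:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl --resolve site1.com:80:127.0.0.1 http://site1.com
$ curl --resolve site2.com:80:127.0.0.1 http://site2.com
</code></pre></div></div>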
<p>Happy hacking!</p>
<ul>
<li><a href="https://traefik.io/blog/traefik-2-0-docker-101-fc2893944b9d/">https://traefik.io/blog/traefik-2-0-docker-101-fc2893944b9d/</a></li>
</ul>
on the nature of daylight, una composición de max richter2020-09-11T00:00:00+00:00http://javier.io/blog/es/2020/09/11/on-the-nature-of-daylight-una-composicion-de-max-richter<h2 id="on-the-nature-of-daylight-una-composición-de-max-richter">on the nature of daylight, una composición de max richter</h2>
<h6 id="11-sep-2020">11 Sep 2020</h6>
<p>Gratitude to <a href="https://es.wikipedia.org/wiki/Max_Richter">those</a> who, through the arts, are able to express our feelings.</p>
<div id="youtube">
<iframe width="560" height="315" src="https://www.youtube.com/embed/4J8hV_8a8y0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
python: sobrecarga de funciones, multimethod2020-05-08T00:00:00+00:00http://javier.io/blog/es/2020/05/08/python-sobrecarga-de-funciones-multimethod<h2 id="python-sobrecarga-de-funciones-multimethod">python: sobrecarga de funciones, multimethod</h2>
<h6 id="08-may-2020">08 May 2020</h6>
<p>One of python’s most annoying <strong>“flaws”</strong> for me is its lack of support
for function overloading. These days, working on a new project, I had to
revisit the topic; I found some homemade solutions, a new
‘singledispatch’ decorator and several libraries. After running some
experiments I finally found an implementation robust enough
to settle the matter; I’m sharing it in case anyone has the same itch:</p>
<ul>
<li><a href="https://pypi.org/project/multimethod/">https://pypi.org/project/multimethod/</a></li>
</ul>
<p><strong><a href="/assets/img/python-sobrecarga-de-funciones-multimethod-example.png"><img src="/assets/img/python-sobrecarga-de-funciones-multimethod-example.png" alt="" /></a></strong></p>
<p>Done, happy multipass 😋</p>
<p><strong><a href="/assets/img/python-sobrecarga-de-funciones-multimethod.png"><img src="/assets/img/python-sobrecarga-de-funciones-multimethod.png" alt="" /></a></strong></p>
el amor, las mujeres y la vida2020-02-28T00:00:00+00:00http://javier.io/blog/es/2020/02/28/el-amor-las-mujeres-y-la-vida<h2 id="el-amor-las-mujeres-y-la-vida">el amor, las mujeres y la vida</h2>
<h6 id="28-feb-2020">28 Feb 2020</h6>
<p>Through poetry we recognize ourselves as human, and what better than heartbreak to
savor these verses.</p>
<h3 id="es-tan-poco">Es tan poco</h3>
<pre class="lyric">
Lo que conoces es tan poco
lo que conoces de mí
lo que conoces son mis nubes
son mis silencios, son mis gestos
lo que conoces es la tristeza de mi casa vista de afuera
son los postigos de mi tristeza
el llamador de mi tristeza.
Pero no sabes nada
a lo sumo, piensas a veces
que es tan poco lo que conozco de ti
lo que conozco o sea tus nubes
o tus silencios, o tus gestos
lo que conozco es la tristeza de tu casa vista de afuera
son los postigos de tu tristeza
el llamador de tu tristeza.
Pero no llamas
Pero no llamo.
</pre>
<h3 id="ella-que-pasa">Ella que pasa</h3>
<pre class="lyric">
Paso que pasa
rostro que pasabas
qué más quieres, te miro
después me olvidaré
después y solo
solo y después
seguro que me olvido.
Paso que pasas
rostro que pasabas
qué más quieres, te quiero
te quiero sólo dos o tres minutos
para conocerte más no tengo tiempo.
Paso que pasas
rostro que pasabas
qué más quieres, ay no
ay no me tientes
que si nos tentamos
no nos podremos olvidar
adiós.
</pre>
<h3 id="ustedes-y-nosotros">Ustedes y nosotros</h3>
<pre class="lyric">
Ustedes cuando aman exigen bienestar
una cama de cedro y un colchón especial
Nosotros cuando amamos es fácil de arreglar
con sábanas qué bueno sin sábanas da igual
Ustedes cuando aman calculan interés
y cuando se desaman calculan otra vez.
Nosotros cuando amamos es como renacer
y si nos desamamos no la pasamos bien.
Ustedes cuando aman son de otra magnitud
hay fotos chismes prensa y el amor es un boom.
Nosotros cuando amamos es un amor común
tan simple y tan sabroso como tener salud.
Ustedes cuando aman consultan el reloj
porque el tiempo que pierden vale medio millón.
Nosotros cuando amamos sin prisa y con fervor
gozamos y nos sale barata la función.
Ustedes cuando aman al analista van
él es quien dictamina si lo hacen bien o mal.
Nosotros cuando amamos sin tanta cortedad
el subconsciente piola se pone a disfrutar.
Ustedes cuando aman exigen bienestar
una cama de cedro y un colchón especial
Nosotros cuando amamos es fácil de arreglar
con sábanas qué bueno sin sábanas da igual.
</pre>
<h3 id="hagamos-un-trato">Hagamos un trato</h3>
<pre class="lyric">
Compañera usted sabe puede contar conmigo
no hasta dos o hasta diez sino contar conmigo.
Si alguna vez advierte que la miro a los ojos
y una veta de amor reconoce en los míos
no alerte sus fusiles, ni piense qué delirio
a pesar de la veta o tal vez porque existe
usted puede contar conmigo.
Si otras veces me encuentra huraño sin motivo
no piense qué flojera, igual puede contar conmigo.
Pero hagamos un trato
yo quisiera contar con usted
es tan lindo saber que usted existe
uno se siente vivo y cuando digo esto
quiero decir contar
aunque sea hasta dos, aunque sea hasta cinco
no ya para que acuda presurosa en mi auxilio
sino para saber a ciencia cierta
que usted sabe que puede contar conmigo.
</pre>
<h3 id="a-la-izquierda-del-roble">A la izquierda del roble</h3>
<pre class="lyric">
No sé si alguna vez les ha pasado a ustedes
pero el Jardín Botánico siempre ha tenido
una agradable propensión a los sueños
a que los insectos suban por las piernas
y la melancolía baje por los brazos
hasta que uno cierra los puños y la atrapa
después de todo el secreto es mirar hacia arriba
y ver cómo las nubes se disputan las copas
y ver cómo los nidos se disputan los pájaros.
No sé si alguna vez les ha pasado a ustedes
pero puede ocurrir que de pronto uno advierta
uno de esos amores de tántalo y azar
que Dios no admite porque tiene celos.
Fíjense que él acusa con ternura
y ella se apoya contra la corteza
fíjense que él va tildando recuerdos
y ella se consterna misteriosamente
para mí que el muchacho está diciendo
lo que se dice a veces en el Jardín Botánico.
Vos lo dijiste, nuestro amor
fue desde siempre un niño muerto
sólo de a ratos parecía que iba a vivir
que iba a vencernos
pero los dos fuimos tan fuertes
que lo dejamos sin su sangre
sin su futuro, sin su cielo
un niño muerto sólo eso
maravilloso y condenado
quizá tuviera una sonrisa como la tuya
dulce y honda, quizá tuviera un alma triste
como mi alma, poca cosa
quizá aprendiera con el tiempo
a desplegarse a usar el mundo
pero los niños que así vienen
muertos de amor, muertos de miedo
tienen tan grande el corazón
que se destruyen sin saberlo.
Vos lo dijiste, nuestro amor
fue desde siempre un niño, un niño muerto.
</pre>
<h3 id="soledades">Soledades</h3>
<pre class="lyric">
Ellos tienen razón
esa felicidad al menos con mayúscula, no existe
ah pero si existiera con minúscula
seria semejante a nuestra breve presoledad
Después de la alegría, viene la soledad
después de la plenitud, viene la soledad
después del amor, viene la soledad.
Ya sé que es una pobre deformación
pero lo cierto es que en ese durable minuto
uno se siente solo en el mundo.
Después de la alegría, después de la plenitud
después del amor, viene la soledad
conforme, ¿pero qué vendrá después de la soledad?
A veces no me siento tan solo
si imagino, mejor dicho si sé
que más allá de mi soledad y de la tuya
otra vez estas vos
aunque sea preguntándote a solas
que vendrá después de la soledad.
</pre>
<p><br /></p>
compartiendo archivos cifrados x internet2019-08-05T00:00:00+00:00http://javier.io/blog/es/2019/08/05/compartir-archivos-cifrados-x-internet<h2 id="compartiendo-archivos-cifrados-x-internet">compartiendo archivos cifrados x internet</h2>
<h6 id="05-aug-2019">05 Aug 2019</h6>
<p>An encrypted file is one that has been obfuscated using public algorithms;
only people holding the secret key can access it.</p>
<h3 id="cifrar">Encrypt</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gpg -v --cipher-algo AES256 --symmetric IMAGEN.PNG
# asks for the password twice
</code></pre></div></div>
<p>A file <strong>IMAGEN.PNG.gpg</strong> is generated; this is the one we must give to our
contact, along with the password.</p>
<p><strong>NOTE: The example above used IMAGEN.PNG, but this works with any
type of file.</strong></p>
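<p>For scripts where the interactive prompt is inconvenient, gpg can take the
passphrase non-interactively; a sketch assuming GnuPG 2.1+ (note the password
ends up in your shell history and process list, so use this only for
throwaway secrets):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gpg --batch --pinentry-mode loopback --passphrase 'secreto' \
    --cipher-algo AES256 --symmetric IMAGEN.PNG
$ gpg --batch --pinentry-mode loopback --passphrase 'secreto' \
    --decrypt IMAGEN.PNG.gpg > IMAGEN.PNG
</code></pre></div></div>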
<h3 id="descifrar">Decrypt</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gpg -v --decrypt IMAGEN.PNG.gpg > IMAGEN.PNG
# asks for the password twice
</code></pre></div></div>
<p>A file <strong>IMAGEN.PNG</strong> is generated, which can be opened with any image
viewer.</p>
<h2 id="seguridad-adicional-estenografía">Additional security: steganography</h2>
<p>Steganography is the technique of hiding messages / data inside other data,
for example storing encrypted files inside mp3 songs.</p>
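<p><code class="language-plaintext highlighter-rouge">hideme.dockerized</code> wraps real steganography tooling, but the simplest form of the idea, appending the payload after the media stream (most players ignore trailing bytes), can be sketched with plain shell; the file names below are stand-ins:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># stand-ins for the song and the encrypted payload
printf 'FAKE-MP3-DATA' > song.mp3
printf 'SECRET-PAYLOAD' > secret.gpg

# hide: append the payload after the audio stream
cat song.mp3 secret.gpg > output.mp3

# recover: skip the original audio size and keep the rest
size=$(wc -c < song.mp3)
tail -c +$((size + 1)) output.mp3 > recovered.gpg

cmp recovered.gpg secret.gpg && echo "payload recovered"
</code></pre></div></div>
<p>Real tools interleave the data inside the audio frames instead of appending it, which is much harder to detect, but the workflow (hide, ship, extract, then decrypt) is the same.</p>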
<h3 id="instalación">Installation</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget https://raw.githubusercontent.com/javier-lopez/learn/master/sh/dockerized/hideme.dockerized
$ chmod +x hideme.dockerized
$ sudo mv hideme.dockerized /usr/bin/hideme.dockerized
</code></pre></div></div>
<h3 id="esconder-datos-en-archivos-de-música">Hiding data in music files</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hideme.dockerized ARCHIVO.MP3 IMAGEN.PNG.gpg
</code></pre></div></div>
<p>An <strong>output.mp3</strong> file is generated; this is the file to hand to our
contact along with the passphrase of <strong>IMAGEN.PNG.gpg</strong></p>
<h3 id="descubrir-datos-en-archivos-de-música">Recovering data from music files</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hideme.dockerized output.mp3 -f
</code></pre></div></div>
<p>An <strong>output.U</strong> file is generated; rename it to the original filename, for example:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mv output.U IMAGEN.PNG.gpg
</code></pre></div></div>
<p>And decrypt it if necessary:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gpg -v --decrypt IMAGEN.PNG.gpg > IMAGEN.PNG
# prompts for the passphrase
</code></pre></div></div>
<p>Done, happy secret-sharing 😋</p>
lvm cheatsheet2019-03-16T00:00:00+00:00http://javier.io/blog/en/2019/03/16/lvm-cheatsheet<h2 id="lvm-cheatsheet">lvm cheatsheet</h2>
<h6 id="16-mar-2019">16 Mar 2019</h6>
<p>There are certain technical things I keep forgetting no matter how many times
I use them: <code class="language-plaintext highlighter-rouge">ln</code> usage, <code class="language-plaintext highlighter-rouge">git</code> parameters and, the reason for this post, <code class="language-plaintext highlighter-rouge">LVM</code>.
So here goes a quick how-to for my future self.</p>
<h3 id="basics">Basics</h3>
<p>In order to understand <strong>LVM</strong> it’s required to grasp its components.</p>
<h2 id="physical-volume-pv">Physical Volume (PV)</h2>
<p>A PV is <strong>any block device</strong> that can be used as storage</p>
<p><strong><a href="/assets/img/lvm_pv.png"><img src="/assets/img/lvm_pv.png" alt="" /></a></strong></p>
<h2 id="volume-group-vg">Volume Group (VG)</h2>
<p>A VG is a group of one or more PVs; it commonly contains several, though.</p>
<p><strong><a href="/assets/img/lvm_vg.png"><img src="/assets/img/lvm_vg.png" alt="" /></a></strong></p>
<h2 id="logical-volume-lv">Logical Volume (LV)</h2>
<p>A LV is a portion (partition) of a VG.</p>
<p><strong><a href="/assets/img/lvm_lv.png"><img src="/assets/img/lvm_lv.png" alt="" /></a></strong></p>
<h3 id="how-to-set-up-multiple-hard-drives-as-one-volume">How to set up multiple hard drives as one volume?</h3>
<p><strong>Define /dev/sda, /dev/sdb2 and /dev/sdc3 as PVs</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo pvcreate /dev/sda /dev/sdb2 /dev/sdc3
</code></pre></div></div>
<p><strong>Create a Volume Group (VG) out of three just defined PVs</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo vgcreate vg_name /dev/sda /dev/sdb2 /dev/sdc3
</code></pre></div></div>
<p><strong>Create a Logical Volume (LV) out of the just defined VG</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo lvcreate -l 100%FREE -n lv_name vg_name
</code></pre></div></div>
<p>Done! Now it can be formatted and mounted like a normal HD, e.g.:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo mkfs.ext4 /dev/vg_name/lv_name
$ echo '/dev/vg_name/lv_name /mount_point ext4 defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mount -a
</code></pre></div></div>
<h3 id="how-to-mount-a-previously-defined-lvm-volume">How to mount a previously defined LVM volume</h3>
<p><strong>Recreate /dev/ LVM partitions</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo vgchange -ay
</code></pre></div></div>
<p>Done! Now it can be mounted as usual (don’t re-run <code class="language-plaintext highlighter-rouge">mkfs</code> on a volume that already holds data), e.g.:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo '/dev/vg_name/lv_name /mount_point ext4 defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mount -a
</code></pre></div></div>
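<p>Since the whole point of <strong>LVM</strong> is flexibility, the recipe I reach for most often is growing a volume after adding a disk. A sketch, assuming the vg_name/lv_name layout from above and a new disk on /dev/sdd; these commands need root and a real VG, so treat them as a recipe rather than something to paste blindly:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo pvcreate /dev/sdd                            #add the new disk to the pool
$ sudo vgextend vg_name /dev/sdd
$ sudo lvextend -l +100%FREE /dev/vg_name/lv_name   #hand all the new space to the LV
$ sudo resize2fs /dev/vg_name/lv_name               #grow the ext4 filesystem online
</code></pre></div></div>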
<p>That’s it! I’ll keep adding <strong>LVM</strong> recipes as I see fit, happy storing,
😊</p>
<ul>
<li><a href="https://askubuntu.com/questions/7002/how-to-set-up-multiple-hard-drives-as-one-volume">https://askubuntu.com/questions/7002/how-to-set-up-multiple-hard-drives-as-one-volume</a></li>
<li><a href="https://www.digitalocean.com/community/tutorials/how-to-use-lvm-to-manage-storage-devices-on-ubuntu-16-04">https://www.digitalocean.com/community/tutorials/how-to-use-lvm-to-manage-storage-devices-on-ubuntu-16-04</a></li>
<li><a href="https://blog.inittab.org/administracion-sistemas/lvm-para-torpes-i/">https://blog.inittab.org/administracion-sistemas/lvm-para-torpes-i/</a></li>
</ul>
how to keep your Git-Fork up to date2018-12-29T00:00:00+00:00http://javier.io/blog/en/2018/12/29/how-to-keep-your-github-fork-up-to-date<h2 id="how-to-keep-your-git-fork-up-to-date">how to keep your Git-Fork up to date</h2>
<h6 id="29-dec-2018">29 Dec 2018</h6>
<p>When you fork a repository and contribute to it, your fork and the upstream
can drift out of sync. The goal, then, is to fetch the current state of the
upstream repository and merge the new changes into your fork, right? Okay!
Let’s get started.</p>
<h3 id="1-create-a-fork">1. Create a Fork</h3>
<p>A fork <a href="https://help.github.com/articles/fork-a-repo/">is a copy of someone else’s repository in your
account</a>, which can evolve as an
independent development project. This tutorial uses GitHub, but it works on any
other git hosting platform, such as Bitbucket or GitLab.</p>
<p><strong><a href="/assets/img/fork_button.jpg"><img src="/assets/img/fork_button.jpg" alt="" /></a></strong></p>
<h3 id="2-clone-the-fork">2. Clone the fork</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone git@github.com:your-user/your-fork.git
</code></pre></div></div>
<h3 id="3-add-the-upstream">3. Add the upstream</h3>
<p>Now we should add the “upstream” remote. You can name it whatever you want;
“upstream” is simply the convention.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git remote add upstream git://github.com/original-author/original-project.git
</code></pre></div></div>
<p>If you now have a look at your remote urls, you should see the following:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git remote -v
origin https://github.com/your-user/your-fork (fetch)
origin https://github.com/your-user/your-fork (push)
upstream https://github.com/original-author/original-project (fetch)
upstream https://github.com/original-author/original-project (push)
</code></pre></div></div>
<h3 id="4-keep-the-upstream-updated">4. Keep the upstream updated</h3>
<p>Now that both URLs are tracked, we can update the two sources independently with:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git fetch upstream
</code></pre></div></div>
<h3 id="5-mergerebase-your-work-with-the-upstream-repository">5. Merge/Rebase your work with the upstream repository</h3>
<p>Then you can just merge the changes.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git checkout master
$ git merge upstream/master
</code></pre></div></div>
<p>With that, you merge the latest changes from the upstream master branch into
your local master branch. If you like, you can also use git pull, which is
nothing more than fetching and merging in one step.</p>
<p><strong>Pro Tip:</strong> In my eyes the best way is to rebase, because it replays
your work on top of the latest changes fetched from the upstream branch. Here
is how it works:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git rebase upstream/master
</code></pre></div></div>
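<p>The whole cycle (clone, add upstream, fetch, rebase) can be rehearsed locally with plain file-path remotes before touching a real project. A throwaway sketch; every repo and file name here is made up:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>set -e
tmp=$(mktemp -d) && cd "$tmp"

# fake "upstream" repository with one commit (-b needs git >= 2.28)
git init -q -b master upstream && cd upstream
git config user.email you@example.com && git config user.name you
echo v1 > README && git add README && git commit -qm "initial"
cd ..

# "fork" it, then let the upstream move ahead
git clone -q upstream fork
(cd upstream && echo v2 > README && git commit -qam "upstream change")

# sync the fork: commit local work, fetch upstream, rebase on top
cd fork
git config user.email you@example.com && git config user.name you
echo local > NOTES && git add NOTES && git commit -qm "fork work"
git remote add upstream ../upstream
git fetch -q upstream
git rebase -q upstream/master
cat README   #now shows the upstream change: v2
</code></pre></div></div>
<p>After the rebase, <code class="language-plaintext highlighter-rouge">git log</code> shows “fork work” sitting on top of “upstream change”, which is exactly the state you want before pushing.</p>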
<h3 id="6-push-your-changes-online">6. Push your changes online</h3>
<p>Finally, you can push your changes so others can benefit</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git push
</code></pre></div></div>
<p>That’s it, happy coding, 😊</p>
offline wallets for cryptocurrencies2018-11-15T00:00:00+00:00http://javier.io/blog/es/2018/11/15/carteras-fuera-de-linea-para-criptomonedas<h2 id="carteras-fuera-de-línea-para-criptomonedas">offline wallets for cryptocurrencies</h2>
<h6 id="15-nov-2018">15 Nov 2018</h6>
<p>Cryptocurrency wallets are pieces of software that give access to the
<a href="https://es.wikipedia.org/wiki/Cadena_de_bloques">blockchains</a> of Bitcoin,
Ethereum, etc. <strong>Each cryptocurrency has its own wallet</strong>, so a Bitcoin
wallet will only work for that protocol, and if you hold <code class="language-plaintext highlighter-rouge">N</code>
cryptocurrencies you will need the same number of wallets to manage the
funds.</p>
<p>There are also programs that can access several protocols at once,
multi-wallets, but those are developed by third parties and tend to be less
trustworthy, which is why this article focuses on using the official
wallets.</p>
<p>Using a wallet requires two things: a public key and a private key.</p>
<p><strong>Public key</strong>: this is the wallet’s address. It is much like a bank
account number, since it can only be used to send money to an
account.</p>
<p><strong>Private key</strong>: this is the information that controls the account’s
funds. It must therefore be kept 100% secret and safe; <strong>if this key is
lost, the funds are lost</strong>.</p>
<p><strong>NOTE</strong>: a public key can be derived from a private key, but not the
other way around, so in theory only the private key needs to be stored.
The public key, however, is often useful for requesting funds from third parties
without risking the private key, so it is convenient to keep both
at hand.</p>
<p>When funds are kept in an exchange, such as <a href="">bitso</a>, <a href="">binance</a>,
etc., those sites are the only ones that know both keys, which is why funds can
be compromised if the site is hacked, and why the exchange can make
unilateral decisions to freeze accounts or withhold funds.</p>
<p>The above is reason enough to keep funds in independent wallets
whenever significant amounts are involved.</p>
<h3 id="wallet">wallet</h3>
<p>Given my technical profile and the variety of cryptocurrencies I manage, I
wrote a script that lets me interact with the wallets of several
protocols conveniently, including BTC, ETH, XRP, LTC, NEO, etc. So that is
the method I will describe. <strong>NOTE: unless you know me personally, I
strongly suggest reviewing the source code in detail to judge its
trustworthiness.</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget https://raw.githubusercontent.com/javier-lopez/learn/master/sh/tools/wallet
$ chmod +x wallet && sudo mv wallet /usr/bin/wallet
$ wallet -h
Usage: wallet ARCHIVE
Dockerized [BTC|BCC|BTCP|LTC|NEO|ETH|XRP|ADA] thin client launcher.
Options:
-U, --update update this program to latest version
-V, --version display version
-h, --help show this message and exit
Examples:
$ wallet Electrum-3.2.2.tar.gz #BTC
$ wallet Electrum-LTC-3.1.3.1.tar.gz #LTC
$ wallet Electrum-BTCP-1.1.1.tar.gz #BTCP
$ wallet ElectronCash-3.3.1.tar.gz #BCC
$ wallet Neon-0.2.6-x86_64.Linux.AppImage #NEO
$ wallet etherwallet-v3.21.03.zip #ETH
$ wallet minimalist-ripple-client.html #XRP
$ wallet daedalus-0.11.0-cardano.exe #ADA
</code></pre></div></div>
<p>The script above requires <a href="https://es.wikipedia.org/wiki/Ubuntu">Ubuntu Linux</a>,
<a href="https://es.wikipedia.org/wiki/Docker_(software)">Docker</a> and the wallet of the
protocol you want to interact with; for Bitcoin, for example, that means
<a href="https://electrum.org">Electrum-3.2.2.tar.gz</a>.</p>
<h3 id="bitcoin-btc">Bitcoin, BTC</h3>
<p>With the script and the wallet in place, you can start interacting with the
blockchain, which is generally done for one of two things: creating new
accounts, or managing existing ones.</p>
<h2 id="generar-nueva-cuenta">Create a new account</h2>
<p>Every time a new account is created, what is actually generated is a
(public/private) key pair for that cryptocurrency, so those are the data
we will be looking for and storing. To launch the BTC wallet,
run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wallet Electrum-3.2.2.tar.gz #BTC
Verify the archive before continuing!!!
SHA256: 69cc3eaef8cc88e92730f3f38850a83e66ffd51d9aa26364f35fd45d1cedaabb
SHA512: 32c4a24c2d3e2e38b9d66f6102176533a991b1c1fd25173bcd3bdd...3c87f15
Waiting 7 seconds.., press Ctrl-C to cancel
</code></pre></div></div>
<p>It is very important to verify that the checksums match the ones published by
the project (SHA256/SHA512); a mismatch would indicate (usually malicious)
modifications to the binaries.</p>
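<p>Checking a download against a published digest is easy to script; <code class="language-plaintext highlighter-rouge">sha256sum -c</code> reads “digest␠␠filename” pairs and fails loudly on a mismatch. A sketch with a stand-in file and a freshly computed digest (for a real download you would paste the digest published by the project instead):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># pretend this file is the downloaded archive
printf 'demo archive contents' > archive.tar.gz

# compute the digest, then verify it the way you would verify a published one
sum=$(sha256sum archive.tar.gz | awk '{print $1}')
echo "$sum  archive.tar.gz" | sha256sum -c -
#prints: archive.tar.gz: OK
</code></pre></div></div>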
<p>The first time the script runs, it builds a secure, reusable environment
for launching the supported wallets. Depending on your internet speed this
process can take up to a couple of hours, and it only happens
once.</p>
<p>Once the binaries are verified, the initial interface appears.</p>
<p><strong><a href="/assets/img/wallet-btc-1.png"><img src="/assets/img/wallet-btc-1.png" alt="" /></a></strong></p>
<p>The first screens show connection options and preferences that would deserve
articles of their own; for now we will use the defaults to keep the process
as quick and efficient as possible.</p>
<p>On the first screen select <strong>Auto connect</strong> and click <strong>next</strong>.</p>
<p><strong><a href="/assets/img/wallet-btc-2.png"><img src="/assets/img/wallet-btc-2.png" alt="" /></a></strong></p>
<p>The second asks for the location of the wallet data; leave it as
<strong>default_wallet</strong> and click <strong>next</strong>.</p>
<p><strong><a href="/assets/img/wallet-btc-3.png"><img src="/assets/img/wallet-btc-3.png" alt="" /></a></strong></p>
<p>The third asks for the wallet type; the standard wallet will do,
click <strong>next</strong>.</p>
<p><strong><a href="/assets/img/wallet-btc-4.png"><img src="/assets/img/wallet-btc-4.png" alt="" /></a></strong></p>
<p>Now comes the part where you choose whether to create a new account or
open an existing one. To create a new account select <strong>Create a new seed</strong>
and click <strong>next</strong>.</p>
<p><strong><a href="/assets/img/wallet-btc-5.png"><img src="/assets/img/wallet-btc-5.png" alt="" /></a></strong></p>
<p>Select the key format, <strong>standard</strong>, and click <strong>next</strong>.</p>
<p><strong><a href="/assets/img/wallet-btc-6.png"><img src="/assets/img/wallet-btc-6.png" alt="" /></a></strong></p>
<p>The wallet now generates the <strong>private key</strong>. <strong>NOTE</strong>: be very
careful with this data; keep it private and store it somewhere
safe. If it is lost or forgotten, the funds become unrecoverable.</p>
<p><strong><a href="/assets/img/wallet-btc-7.png"><img src="/assets/img/wallet-btc-7.png" alt="" /></a></strong></p>
<p>To make sure the data has been stored, the next screen asks for
the private key again.</p>
<p><strong><a href="/assets/img/wallet-btc-8.png"><img src="/assets/img/wallet-btc-8.png" alt="" /></a></strong></p>
<p>Next, you are asked for a password to internally encrypt the freshly
generated information; because of the way the wallet is launched it is not
necessary to set one, as the environment deletes the temporary files when
the session ends.</p>
<p>Instead, it is worth storing the <strong>private key</strong> in an encrypted
file kept somewhere safe. Personally I keep my private keys in a .txt file
and encrypt it with
<a href="https://es.wikipedia.org/wiki/GNU_Privacy_Guard">GPG</a>.</p>
<p><strong><a href="/assets/img/wallet-btc-9.png"><img src="/assets/img/wallet-btc-9.png" alt="" /></a></strong></p>
<p>Finally it is time to interact with the new account; from this interface
you can check the balance and send and receive bitcoins. To see the public
key go to the <code class="language-plaintext highlighter-rouge">receive</code> tab.</p>
<p><strong><a href="/assets/img/wallet-btc-9.1.png"><img src="/assets/img/wallet-btc-9.1.png" alt="" /></a></strong></p>
<p>Done! You now have a new BTC account.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Public key: 1JTg8e3yfCBbk22...
Private key: hub leopard broken neutral trash ...
</code></pre></div></div>
<h2 id="gestionar-cuenta-existente">Manage an existing account</h2>
<p>If you already have a private key from before (the 12-word phrase),
continue from the 4th step, and this time select <strong>I already have a seed</strong>.</p>
<p><strong><a href="/assets/img/wallet-btc-10.png"><img src="/assets/img/wallet-btc-10.png" alt="" /></a></strong></p>
<p>Enter the private key and click <strong>next</strong></p>
<p><strong><a href="/assets/img/wallet-btc-11.png"><img src="/assets/img/wallet-btc-11.png" alt="" /></a></strong></p>
<p>You will be asked for a password to encrypt the temporary data; it is not
necessary, since the script deletes that data when the wallet closes. Click
<strong>next</strong></p>
<p><strong><a href="/assets/img/wallet-btc-12.png"><img src="/assets/img/wallet-btc-12.png" alt="" /></a></strong></p>
<p>You can now interact with the account to send or receive BTC.</p>
<p><strong><a href="/assets/img/wallet-btc-13.png"><img src="/assets/img/wallet-btc-13.png" alt="" /></a></strong></p>
<h3 id="neo">Neo</h3>
<p>The process is similar for other wallets; in each case only the wallet
file and the graphical interface change. Let’s look at NEO.</p>
<h2 id="generar-nueva-cuenta-1">Create a new account</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wallet Neon-0.2.6-x86_64.Linux.AppImage
Verify the archive before continuing!!!
SHA256: 78276848a23d89db4d56965d94784c710d4281ca8085cfd0644644e08d1074bf
SHA512: 3bf87818885128ad74cd018fd2e437e32f274a5974b473b74bad84...9b5c548
</code></pre></div></div>
<p>After verifying the checksums, the graphical interface appears.</p>
<p><strong><a href="/assets/img/wallet-neo-1.png"><img src="/assets/img/wallet-neo-1.png" alt="" /></a></strong></p>
<p>Although it looks different, it actually offers the same options. To create a
new account select <strong>Create a new wallet</strong></p>
<p><strong><a href="/assets/img/wallet-neo-2.png"><img src="/assets/img/wallet-neo-2.png" alt="" /></a></strong></p>
<p>The NEO wallet forces us to set a password to encrypt the wallet’s temporary
data, but there is no need to remember it; a throwaway one will
do.</p>
<p><strong><a href="/assets/img/wallet-neo-3.png"><img src="/assets/img/wallet-neo-3.png" alt="" /></a></strong></p>
<p>Once done, the public and private keys are shown.</p>
<p>Done! You now also have a new NEO account.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Public key: AaUZif7n6FiVPyH...
Private key: KwuHYFeuJucZz94KanqaqaRkEi71mRdQbPZ...
</code></pre></div></div>
<h2 id="gestionar-cuenta-existente-1">Manage an existing account</h2>
<p>If you already have a NEO private key, select
<strong>Login using a private key</strong> in the main menu.</p>
<p><strong><a href="/assets/img/wallet-neo-1.png"><img src="/assets/img/wallet-neo-1.png" alt="" /></a></strong></p>
<p>Enter the key and click <strong>Login</strong></p>
<p><strong><a href="/assets/img/wallet-neo-10.png"><img src="/assets/img/wallet-neo-10.png" alt="" /></a></strong></p>
<p>Done, you can now send and receive NEOs and compatible cryptocurrencies, such as
GAS.</p>
<p><strong><a href="/assets/img/wallet-neo-11.png"><img src="/assets/img/wallet-neo-11.png" alt="" /></a></strong></p>
<hr />
<p>Done, happy managing of your finances 😋.</p>
<ul>
<li><a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/wallet">https://github.com/javier-lopez/learn/blob/master/sh/tools/wallet</a></li>
</ul>
say hi to react through docker integration2018-11-10T00:00:00+00:00http://javier.io/blog/en/2018/11/10/say-hi-to-react-through-docker-integration<h2 id="say-hi-to-react-through-docker-integration">say hi to react through docker integration</h2>
<h6 id="10-nov-2018">10 Nov 2018</h6>
<p>I’ve started learning <a href="https://reactjs.org/">react</a> and there is no way I’m
installing <code class="language-plaintext highlighter-rouge">npm/yarn/create-react-app</code> or any other nonsense just to develop single
pages.</p>
<p>Therefore here are some instructions to encapsulate everything within docker
containers.</p>
<h3 id="bootstrapping">Bootstrapping</h3>
<p>React being the complex software stack it is, it requires <code class="language-plaintext highlighter-rouge">create-react*</code>
scripts to bootstrap projects, so the first step is to create a
minimal container with those tools:</p>
<h2 id="dockerfile">Dockerfile</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM node:alpine
RUN yarn global add create-react-app create-react-native-app react-native-cli
</code></pre></div></div>
<p>Let’s call this container <code class="language-plaintext highlighter-rouge">bootstrap-react</code>:</p>
<pre class="sh_sh">
$ docker build . -t bootstrap-react
</pre>
<p>Then we can bootstrap <code class="language-plaintext highlighter-rouge">react/react-native</code> projects as usual:</p>
<pre class="sh_sh">
$ docker run -it --rm --user "$(id -u):$(id -g)" \
-v "${PWD}":/usr/src/app -w /usr/src/app \
bootstrap-react \
create-react-app my-new-react-project
</pre>
<p>The <code class="language-plaintext highlighter-rouge">my-new-react-project</code> folder contains a bunch of files required to start
hacking away:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tree my-new-react-project/ | head
┬
├── .gitignore
├── node_modules
│ ├── abab
│ │ ├── CHANGELOG.md
├── package.json
├── public
│ ├── index.html
├── src
├── App.css
├── App.js
</code></pre></div></div>
<h3 id="dockerizing-new-project">Dockerizing new project</h3>
<p>Now, wouldn’t it be cool if everyone could replicate the project in a single
step? Let’s add some more magic.</p>
<h2 id="docker-composeyml">docker-compose.yml</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>version: '3.4'
services:
app:
image: node:alpine
volumes:
- .:/usr/src/app
working_dir: /usr/src/app
command: sh -c "yarn && yarn start"
ports:
- "3000:3000"
</code></pre></div></div>
<p>That’s it! Now it can be launched and accessed at
<a href="http://localhost:3000">http://localhost:3000</a>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker-compose up
app_1 | You can now view my-new-react-project in the browser.
app_1 |
app_1 | Local: http://localhost:3000/
app_1 | On Your Network: http://172.18.0.2:3000/
</code></pre></div></div>
<p><strong><a href="/assets/img/react.png"><img src="/assets/img/react.png" alt="" /></a></strong></p>
<p>Sweet! Autoreloading works, so we can start modifying
files and the changes will show up instantly.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ vim src/App.js
#some changes later ...
</code></pre></div></div>
<p><strong><a href="/assets/img/react-hello-world.png"><img src="/assets/img/react-hello-world.png" alt="" /></a></strong></p>
<p>If you enjoyed the process but still don’t want to go through all the steps,
feel free to grab the <a href="https://github.com/javier-lopez/docker-react-hello-world">docker-react-hello-world</a>
template as your starting point.</p>
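<p>If you later want to ship the result instead of running the dev server, a multi-stage build keeps node out of the final image. A sketch under assumptions: create-react-app puts the static bundle in <code class="language-plaintext highlighter-rouge">build/</code> after <code class="language-plaintext highlighter-rouge">yarn build</code>, and nginx serves it; adjust the image tags to taste:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># build stage: compile the static bundle
FROM node:alpine AS build
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY . .
RUN yarn build

# runtime stage: only the static files and a web server
FROM nginx:alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html
</code></pre></div></div>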
<p>That’s it, happy coding, 😊</p>
removing passwords from git repositories2018-10-04T00:00:00+00:00http://javier.io/blog/en/2018/10/04/remove-passwords-from-git-repository<h2 id="removing-passwords-from-git-repositories">removing passwords from git repositories</h2>
<h6 id="04-oct-2018">04 Oct 2018</h6>
<p>Here’s how to remove a password from any file, in all revisions, in a git repository:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git filter-branch --tree-filter \
"find . -type f -exec sed -i -e 's/password/XXX/g' {} \;" HEAD
</code></pre></div></div>
<p>Another handy one, deleting all the lines containing <strong>word</strong>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git filter-branch --tree-filter \
"find . -type f -exec sed -i -e '/word/d' {} \;" HEAD
</code></pre></div></div>
<p>Finally, the classic remove file with sensitive data:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git filter-branch --force --index-filter \
'git rm --cached --ignore-unmatch PATH-TO-YOUR-FILE-WITH-SENSITIVE-DATA' \
--prune-empty --tag-name-filter cat -- --all
</code></pre></div></div>
<p>Now, to force push your changes to a remote repository:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git push -f
</code></pre></div></div>
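<p>To convince yourself the rewrite really touched every revision, the first recipe can be rehearsed in a throwaway repo; all names here are made up:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q leaky && cd leaky
git config user.email you@example.com && git config user.name you

# two revisions, both containing the secret
echo 'token=password123' > config.ini && git add . && git commit -qm "one"
echo 'token=password123 #still here' > config.ini && git commit -qam "two"

# scrub every revision
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --tree-filter \
    "find . -type f -exec sed -i -e 's/password123/XXX/g' {} \;" HEAD > /dev/null

git log -p | grep password123 || echo "secret gone from history"
</code></pre></div></div>
<p>The same dry run works for the other two recipes; only the filter changes.</p>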
<p>That’s it, happy safe coding, 😊</p>
<ul>
<li><a href="http://www.davidverhasselt.com/git-how-to-remove-your-password-from-a-repository/">http://www.davidverhasselt.com/git-how-to-remove-your-password-from-a-repository/</a></li>
</ul>
package python scripts and dependencies in single files with pex2018-10-02T00:00:00+00:00http://javier.io/blog/en/2018/10/02/package-python-script-and-dependencies-in-single-file-with-pex<h2 id="package-python-scripts-and-dependencies-in-single-files-with-pex">package python scripts and dependencies in single files with pex</h2>
<h6 id="02-oct-2018">02 Oct 2018</h6>
<p>So you want to create a pex that packages your script and its dependencies? Ok,
first let’s make our script! Call it <strong>my_script.py</strong>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import requests
if __name__ == '__main__':
    req = requests.get("https://raw.githubusercontent.com/pantsbuild/pex/master/README.rst")
    print(req.text.split("\n")[0])
</code></pre></div></div>
<p><strong>requirements.txt</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>requests
</code></pre></div></div>
<p>Now, it’s time to package it!:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pex -o my-script.pex -D . -r requirements.txt -e my_script
my-script.pex
</code></pre></div></div>
<p>Done. But wait, too lazy to even install pex/pip? Try docker:</p>
<p><strong>Dockerfile</strong>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#FROM python:3.6.4
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y libev-dev python-pip
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt pex #pex itself is needed inside the container
COPY . /usr/src/app
CMD [ "python", "my_script.py" ]
</code></pre></div></div>
<p>And then:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker build -t pex-builder .
$ docker run -v "$PWD:/usr/src/app" pex-builder \
pex -o my-script.pex -D . -r requirements.txt -e my_script
</code></pre></div></div>
<p>Done, now we have a (relatively) portable way of distributing and running our scripts:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./my-script.pex
...
</code></pre></div></div>
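<p>Under the hood a pex is just a zip archive with a shebang, the same trick as the stdlib <code class="language-plaintext highlighter-rouge">zipapp</code> module; the concept fits in a few lines of shell. A bare-bones sketch with no dependency bundling (that part is what pex adds):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir -p app
cat > app/__main__.py &lt;&lt;'EOF'
print("hello from a single-file app")
EOF

# pack the directory into a self-running archive
python3 -m zipapp app -o hello.pyz -p "/usr/bin/env python3"
chmod +x hello.pyz
./hello.pyz   #prints: hello from a single-file app
</code></pre></div></div>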
<p>That’s it, happy packaging 😊</p>
<ul>
<li><a href="https://www.pither.com/simon/blog/2018/09/18/how-build-portable-executable-single-python-script">https://www.pither.com/simon/blog/2018/09/18/how-build-portable-executable-single-python-script</a></li>
</ul>
minos, a tiling wm linux distribution2018-08-22T00:00:00+00:00http://javier.io/blog/en/2018/08/22/minos-a-tiling-wm-linux-distro<h2 id="minos-a-tiling-wm-linux-distribution">minos, a tiling wm linux distribution</h2>
<h6 id="22-aug-2018">22 Aug 2018</h6>
<p>I’ve been working in my spare time, for the last 7-8 years, on yet another Linux
respin, and I thought I’d better write something about it so my co-workers and
friends have an easier time getting started.</p>
<h3 id="about">About</h3>
<p><a href="https://github.com/minos-org/">Minos</a> is a personal effort to get a stable,
performant and productive Linux system for power user/dev roles.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ Based on Ubuntu LTS releases with BedRock Linux support on its way
▸ 16.04 / 18.04 / 20.04
▸ Tiling window manager, i3wm + patches
▸ Full battery cli workflow, urxvt, tmux, vim, wicd, shundle, ...
▸ Non-intrusive and fast dmenu based launchers for sessions, process
management, virtualization, etc.
▸ Handpicked minimal yet powerful apps for common tasks:
▸ file manager - pcmanfm | login screen - slim
▸ image viewing - feh, sxiv | pdf reader - zathura
▸ music indexing - mpd | video player - mplayer2, umplayer
▸ network manager - wicd-curses | email client - mutt
▸ ...
</code></pre></div></div>
<p><strong><a href="/assets/img/minos-movie.png"><img src="/assets/img/minos-movie.png" alt="" /></a></strong></p>
<h3 id="principles">Principles</h3>
<p>In order to achieve this goal, minos’ design is led by:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Minimalism → use as few elements as possible but not less
Coherence → based on modularity and composition, elements relate to each other
Stability → incremental over revolutionary
Control → extensive configuration options
Pluggable → plugin based components
Beauty → subjective, but right now mostly black =P
</code></pre></div></div>
<p>There exist two versions of the system:</p>
<ul>
<li><strong>Core: X less environment, ideal for servers.</strong></li>
<li><strong>Desktop: Graphic tiling wm environment for laptops/workstations.</strong></li>
</ul>
<h3 id="installation">Installation</h3>
<h4 id="ubuntu-lts-based-distro">Ubuntu LTS based distro</h4>
<p>On any Ubuntu LTS release add the Minos repository:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo add-apt-repository ppa:minos-archive/main
$ sudo apt-get update
</code></pre></div></div>
<p>And install the core or/and desktop metapackages:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get install -y minos-core
$ sudo apt-get install -y minos-desktop #includes minos-core
</code></pre></div></div>
<p>Or run the <a href="http://minos.io/s">http://minos.io/s</a> installer:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sh <(wget -q -O- minos.io/s) core
$ sh <(wget -q -O- minos.io/s) desktop
</code></pre></div></div>
<h4 id="live-ubuntu-lts-based-distro">Live Ubuntu LTS based distro</h4>
<p>From any [L/X/K]Ubuntu live usb run the <a href="http://minos.io/s">http://minos.io/s</a>
installer:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sh <(wget -q -O- minos.io/s) live core /dev/sdX username passwd [/dev/sdaY]
$ sh <(wget -q -O- minos.io/s) live desktop /dev/sdX username passwd [/dev/sdaY]
</code></pre></div></div>
<p>Where:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/dev/sdX → / mount point
username → admin minos user
passwd → admin minos user password
/dev/sdY → /home mount point (optional)
--release [16.04|18.04|20.04] (optional)
</code></pre></div></div>
<h4 id="vagrant">Vagrant</h4>
<p>Minos is also available as portable VirtualBox images:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ vagrant init minos/core-20.04 && vagrant up
$ vagrant init minos/desktop-20.04 && vagrant up
</code></pre></div></div>
<p>Additional boxes are located at
<a href="https://app.vagrantup.com/minos">https://app.vagrantup.com/minos</a></p>
<h3 id="getting-started">Getting started</h3>
<h4 id="aptdpkg">apt/dpkg</h4>
<p>Minos is based on Debian/Ubuntu; as such, it uses <code class="language-plaintext highlighter-rouge">apt/dpkg</code> tools to
manage/install software. Some of the configuration changes include:</p>
<p><strong>Core</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ Recommended and suggested packages[0] are disabled by default. Use
→ $ sudo dpkg-reconfigure minos-core-settings #to change it
▸ shundle/aliazator add install, purge, remove, update and upgrade aliases:
→ $ type install
> install is aliased to `sudo apt-get install --no-install-recommends'
* Use:
→ $ aliazator [enable|disable] apt-get #to modify this behavior
▸ eix is provided as an alternative apt-get/apt/aptitude interface
→ $ eix -h
</code></pre></div></div>
<p><strong>Desktop</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ Deb packages are cached and shared over avahi (zeroconf)
</code></pre></div></div>
<ul>
<li><a href="https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-depends">https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-depends</a></li>
<li><a href="https://www.unix-ag.uni-kl.de/~bloch/acng/">https://www.unix-ag.uni-kl.de/~bloch/acng/</a></li>
</ul>
<h4 id="static-get">static-get</h4>
<p><a href="https://github.com/minos-org/minos-static">static-get</a> is included as an
alternative installation medium that allows fetching statically linked Linux
binaries.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ Search available tmux versions
→ $ static-get -s tmux
> tmux-1.9a-1.tar.xz:8ec9d37183d48d3e171e89b1dae6e212a5918262d10
> tmux-2.1-1.tar.xz:8172e0f2b39818ee747fa5b445a0a69342c11d1afa72
> tmux-2.2-1.tar.xz:f499f6e9368a5022f45b726759b588e52b16442ae2f3
▸ Download and extract the specified package
→ $ static-get -x tmux-2.2
> tmux-2.2-1.tar.xz
> tmux-2.2-1/
</code></pre></div></div>
<h4 id="shell--shundle">shell / shundle</h4>
<p>The default bash editing mode is set to <code class="language-plaintext highlighter-rouge">vi</code> with some <code class="language-plaintext highlighter-rouge">emacs</code> exceptions,
meaning that common shortcuts like <code class="language-plaintext highlighter-rouge"><Ctrl-l> (clear screen)</code>, <code class="language-plaintext highlighter-rouge"><Ctrl-r>
(reverse cmd search)</code> and <code class="language-plaintext highlighter-rouge"><Ctrl-a>/<Ctrl-e> (start/end of line)</code> work
the same, while vi keybindings are used for all other actions. One of the
most powerful characteristics of this mode is text objects.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ text objects:
→ $ echo "text object" #pressing ci" while in the 'text' word results in
→ $ echo "" #removing the inner " characters
</code></pre></div></div>
<p>Math operations are recognized:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ examples include: +, -, *, /, %
→ $ 5 + 5
> 10
→ $ 7 \* 2.3
> 16.1
</code></pre></div></div>
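<p>Bash doesn’t evaluate bare arithmetic like this on its own; minos presumably wires it through bash’s <code>command_not_found_handle</code> hook. A minimal sketch of the idea, not the actual minos implementation:</p>

```shell
# when a "command" starts with a digit, treat it as arithmetic and hand it
# to awk; command_not_found_handle is a bash (>=4) hook, hence explicit bash -s
result=$(bash -s <<'EOF'
command_not_found_handle() {
    case "$*" in
        [0-9.]*) awk "BEGIN { print $* }" ;;                        # looks like math
        *) printf '%s: command not found\n' "$1" >&2; return 127 ;; # normal failure
    esac
}
5 + 5
7 \* 2.3
EOF
)
printf '%s\n' "$result"   # prints 10 and 16.1
```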
<p>Search and other common actions are integrated within the shell:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ open the default web browser
→ $ 1999, binary finary :google
▸ or get back results from cli utils that output to the console directly
→ $ howdoi format date bash
> today=`date +%Y-%m-%d.%H:%M:%S`
→ $ translate -en-pt 'deleted code is debugged code'
> Código eliminado é código depurado
▸ open resources by their name/suffix or by using the `open` launcher
→ $ /path/to/image.png
→ $ open https://wikipedia.org
</code></pre></div></div>
<p>Autocd, auto ls, and directory indexing are enabled for faster jumping between
directories:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ create and jump in one cmd
→ $ mkcd ~/a/long/path/including/a/directory/
▸ change directories without requiring cd
→ $ ~/a/long/path/including/a/
▸ index directory paths, see `man wcd`
→ $ update-cd
→ $ cd including #go to ~/a/long/path/including/
▸ backward pwd search for normal and versioned projects, see `command -v ,,,`
→ $ ,, long #go to ~/a/long
</code></pre></div></div>
<p><strong>Desktop</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ <Alt-Esc> is mapped to `dmenu-launcher` which supports the above
attributes plus clipboard integration
</code></pre></div></div>
<p>In order to provide additional yet optional characteristics, plugin-based
components are offered; <a href="https://github.com/javier-lopez/shundle">shundle</a> is
the mechanism through which they’re managed, alternatives include
<a href="https://github.com/robbyrussell/oh-my-zsh">oh-my-zsh</a> /
<a href="https://github.com/sorin-ionescu/prezto">prezto</a> /
<a href="https://github.com/Bash-it/bash-it">bash-it</a>, etc. Shundle allows installing
scripts/modules which enrich the shell environment with sane defaults, aliases,
functions and prompt themes.</p>
<p>By default the following plugins are enabled (<strong>~/.profile.d/shundle.sh</strong>):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ aliazator: An alias manager, providing hundreds of aliases for common
commands, eg: apt-get, git, ssh, sudo, wget, vim, etc.
▸ autocd: Current directory autosaving (pwd), allows external applications
to start "from here", used for new urxvt/tmux instances.
▸ colorize: Provides prompt, X resources and less/grep/ls themes.
▸ eternalize: Store an eternal history file across sessions
</code></pre></div></div>
<p>Shundle integration is provided by the <code class="language-plaintext highlighter-rouge">minos-core-settings</code> package and can be
disabled/enabled by running:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo dpkg-reconfigure minos-core-settings #shundle option
</code></pre></div></div>
<h4 id="tmux--tundle">tmux / tundle</h4>
<p>Simply speaking, <a href="https://tmux.github.io/">tmux</a> acts as a window manager for
terminals; on Minos, it’s configured and installed by default along with
<a href="https://mosh.org">mosh</a> to provide robust, secure and efficient access to
local and remote shell sessions.</p>
<p>tmux is launched on every incoming ssh connection and within the scratchpad
window <code class="language-plaintext highlighter-rouge"><Windows><Space></code> (desktop edition). Of course, it can also be
initialized manually.</p>
<p>The default prefix sequence has been changed from <code class="language-plaintext highlighter-rouge"><Ctrl-b></code> → <code class="language-plaintext highlighter-rouge"><Ctrl-a></code></p>
<p>As with the bash interpreter, tmux can be customized/extended through
additional plugins, Minos includes
<a href="https://github.com/javier-lopez/tundle">tundle</a> as the default tmux plugin
manager.</p>
<p>By default the following plugins are enabled (<strong>~/.tmux.conf</strong>):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ tmux-sensible: improve tmux defaults, including <Ctrl-a> as default
prefix.
▸ tmux-pain-control: rebinds default keybindings for pane management.
▸ tmux-yank: tmux/system clipboard integration
▸ tmux-resurrect: persists tmux environments across system restarts
▸ tmux-copycat: enhances tmux search to find easily files, git hashes,
urls, etc
</code></pre></div></div>
<p>Tundle integration is provided by the <code class="language-plaintext highlighter-rouge">minos-core-settings</code> package, and can
be enabled/disabled by running:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo dpkg-reconfigure minos-core-settings #tundle option
</code></pre></div></div>
<h4 id="vim--vundle">vim / vundle</h4>
<p><a href="https://www.vim.org/">Vim</a> is a highly configurable text editor mostly used by
power users and developers to create content at the speed of thought. On Minos,
it’s included by default with the <code class="language-plaintext highlighter-rouge">vim-nox</code> and <code class="language-plaintext highlighter-rouge">vim-gtk</code> packages (the latter
only in the desktop version). <a href="https://github.com/javier-lopez/vundle">Vundle</a>
has been adopted as the default vim plugin manager.</p>
<p>A fair number of vim plugins are included (~50); most of them are loaded on
demand or upon specific events so as not to affect the editor startup
time. Some examples include:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Bundle 'bogado/file-line' "jump to line on startup, eg: $ vim file:23
Bundle 'mhinz/vim-signify' "git|svn modification viewer
Bundle 'tpope/vim-surround' "text :h objects on steroids
Bundle 'msanders/snipmate.vim' "snippet support
Bundle 'Shougo/neocomplcache' "autocompletion
</code></pre></div></div>
<p>Vundle integration is provided by the <code class="language-plaintext highlighter-rouge">minos-core-settings</code> package, run
the following command to disable/enable it:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo dpkg-reconfigure minos-core-settings #vundle option
</code></pre></div></div>
<h4 id="minos-tools">minos-tools</h4>
<p>Additional wrappers and power user scripts (>100) are available through the
<code class="language-plaintext highlighter-rouge">minos-core-tools</code> and <code class="language-plaintext highlighter-rouge">minos-desktop-tools</code> packages.</p>
<p><strong>Core</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ rm wrapper with nautilus/pcmanfm trash management integration
→ $ mkdir ~/a/long/path/including/a/directory
→ $ rm -r ~/a/long/path/including/a/directory
→ $ rm -l a #outputs recoverable files matching the 'a' pattern
→ $ rm -u a #recovers the files matching the 'a' pattern
→ $ ls ~/a/long/path/including/a/directory/
▸ compress / extract wrappers to ease archive creation/decompression.
→ $ touch a b c && compress a b c abc.tar.gz
→ $ rm -f a b c && extract abc.tar.gz
▸ text / image pastebins
→ $ cat ~/.bashrc | sprunge
> http://sprunge.us/AYZC
→ $ uimg image.png
> http://i.imgur.com/KyoFMH9.png
</code></pre></div></div>
<p><strong>Desktop</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ dmenu-* #dmenu based launchers, i3 window jumper,
#process/session management, vbox/xrandr/mpd/ wrappers
▸ watch-battery #battery notifier, suspend/hibernate the system if
#no manual action is taken
▸ player-ctrl #control multimedia players, mpd/mplayer/spotify
</code></pre></div></div>
<p>To get a full list of the included scripts run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dpkg -L minos-core-tools minos-desktop-tools
</code></pre></div></div>
<h3 id="minos-config">minos-config</h3>
<p>Minos is commanded by configuration files, which determine global settings
(eg, wallpaper, autostart, etc), post-installation hooks, etc:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- $HOME/.minos/config
- /etc/minos/config
</code></pre></div></div>
<p>A simple ini-like syntax is used, e.g.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/etc/minos/config
wallpaper /usr/share/minos/wallpapers/minos.png
</code></pre></div></div>
<p>To look up a value, use <code class="language-plaintext highlighter-rouge">minos-config key</code>, e.g:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ minos-config wallpaper
/usr/share/minos/wallpapers/minos.png
</code></pre></div></div>
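<p>Under the hood this is just a whitespace-separated key/value lookup; a minimal sketch of such a lookup with awk (minos-config’s actual implementation may differ):</p>

```shell
# demo config file using the same ini-like "key value" layout
conf=$(mktemp)
cat > "$conf" <<'EOF'
wallpaper /usr/share/minos/wallpapers/minos.png
terminal urxvt
EOF

# print the value of the first line whose first field matches the key
lookup() { awk -v k="$1" '$1 == k { print $2; exit }' "$2"; }

lookup wallpaper "$conf"   # prints /usr/share/minos/wallpapers/minos.png
```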
<p>See <code class="language-plaintext highlighter-rouge">minos-config -h</code> and
<a href="http://minos.io/doc/config">http://minos.io/doc/config</a> for further details.</p>
<h3 id="development">Development</h3>
<p>Minos uses a Rolling Release over LTS cycle, meaning it pushes frequent and
small updates to LTS releases and doesn’t provide named releases by itself.</p>
<p>There is a parity:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ 1 deb package => 2 git repositories
\
\__ program src
\__ deb packaging
</code></pre></div></div>
<p>This requires a package to compile correctly in all supported LTS releases
with the same deb code in order to be accepted. Other Debian based distros
create different packaging code for every release; that’s unacceptable in Minos
due to the limited human resources and the general waste it would imply.</p>
<p>Deb <strong>source files</strong> are located at
<a href="https://github.com/minos-org/">https://github.com/minos-org/</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>foo-program (custom/freeze program)
foo-program-deb (deb packaging)
debian
rules
get-orig-source target (must retrieve content)
debian/README.source (step by step instructions to build package)
</code></pre></div></div>
<p>Deb <strong>binary packages</strong> are located at
<a href="https://launchpad.net/~minos-archive/+archive/main">https://launchpad.net/~minos-archive/+archive/main</a>,
and are created using daily recipes associated with every source mirror.</p>
<p>On certain occasions, base repositories are modified to introduce changes or
delete problematic files, those changes are automatic and described at:
<a href="https://github.com/minos-org/minos-sync">https://github.com/minos-org/minos-sync</a></p>
<h4 id="choosing-default-applications">Choosing default applications</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>▸ Default applications are selected with good documentation, flexibility,
configurability and as few dependencies as possible in mind.
▸ Systems supporting composition/specialization are preferred over
generalization
▸ Keyboard oriented applications are preferred over pointing interfaces
▸ GUI programs are nice but rejected if they use ancient graphical
interfaces or use considerable resources.
▸ When in doubt, http://suckless.org/rocks provides additional hints about
how software is selected into the project
</code></pre></div></div>
<h4 id="choosing-default-behavior">Choosing default behavior</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ▸ Toggle solutions are preferred over multichoice.
▸ vi-like applications/settings are preferred
▸ Defaults are configured with a focus in the out-of-the-box experience
</code></pre></div></div>
<h4 id="roadmap">Roadmap</h4>
<p>Minos is built on top of the most popular Linux distribution system to get a
lot of free software and an easy integration with third-party providers. At the
time of writing this, that’s Ubuntu, however future development should be
towards a multi-channel system such as <a href="http://subuser.org/">SubUser</a> or
<a href="https://bedrocklinux.org/">BedRock Linux</a>.</p>
<p>That’s it, happy tiling 😊</p>
disable broken touchpad device2018-02-19T00:00:00+00:00http://javier.io/blog/en/2018/02/19/disable-broken-touchpad<h2 id="disable-broken-touchpad-device">disable broken touchpad device</h2>
<h6 id="19-feb-2018">19 Feb 2018</h6>
<p>From time to time I accidentally drop liquids on my thinkpad laptop and the touchpad starts behaving funny; when that happens I prefer to disable it completely for some hours/days until it fixes itself.</p>
<p><strong>Distro: Ubuntu 16.04</strong></p>
<pre>
$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=12 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)] => THIS ONE
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ Integrated Camera: Integrated C id=9 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=13 [slave keyboard (3)]
$ xinput --disable 11
$ xinput --enable 11 #when the time comes
</pre>
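<p>Since device ids can change between reboots, the id can also be resolved from the device name instead of hard-coding <code>11</code>; a small sketch parsing the <code>xinput list</code> output shown above:</p>

```shell
# sample line from `xinput list` (the id differs per machine)
line='⎜   ↳ SynPS/2 Synaptics TouchPad   id=11   [slave  pointer  (2)]'

# extract the numeric id following "id=" on the touchpad line
id=$(printf '%s\n' "$line" | sed -n 's/.*Synaptics TouchPad.*id=\([0-9]*\).*/\1/p')
echo "$id"   # prints 11

# then, on a live X session: xinput --disable "$id"
```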
<p>That’s it, happy accidents 😋</p>
<ul>
<li>https://askubuntu.com/questions/65951/how-to-disable-the-touchpad</li>
</ul>
x509: certificate signed by unknown authority docker error2018-02-07T00:00:00+00:00http://javier.io/blog/en/2018/02/07/x509-certificate-signed-by-unknown-authority-docker-error<h2 id="x509-certificate-signed-by-unknown-authority-docker-error">x509: certificate signed by unknown authority docker error</h2>
<h6 id="07-feb-2018">07 Feb 2018</h6>
<p>At work we use internal docker registries, and from time to time I encounter this error when trying to pull/push to https registries, so I’m leaving the procedure to add self-signed certificates for the future me and others.</p>
<p>Distro: Ubuntu 16.04</p>
<p>Docker: 17.12.0-ce, build c97c6d</p>
<pre>
$ # export registry certificate
$ openssl s_client -showcerts -connect \
registry.example.com:443 </dev/null 2>/dev/null | \
openssl x509 -outform PEM > registry.example.com.crt
$ # add it globally
$ sudo mv registry.example.com.crt /usr/local/share/ca-certificates
$ # update global certificates definitions
$ sudo update-ca-certificates
$ # restart affected services
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
</pre>
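<p>Before trusting a certificate system-wide it’s worth inspecting its subject and expiry; a quick sketch using a throwaway self-signed certificate (substitute your real <code>registry.example.com.crt</code>):</p>

```shell
# generate a throwaway self-signed cert to stand in for the registry's
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=registry.example.com" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# confirm the CN matches the registry hostname and the cert hasn't expired
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```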
<p>That’s it, happy pulling/pushing 😋</p>
block ads in openwrt routers2017-11-27T00:00:00+00:00http://javier.io/blog/en/2017/11/27/block-ads-in-openwrt<h2 id="block-ads-in-openwrt-routers">block ads in openwrt routers</h2>
<h6 id="27-nov-2017">27 Nov 2017</h6>
<p>In previous posts I wrote about how to <a href="http://javier.io/blog/en/2017/11/26/block-youtube-in-openwrt.html">block YouTube and other services by ip</a>; this time I’ll show how to do the same by dns, a kind of adblock for openwrt.</p>
<p>The target router is a <a href="http://www.amazon.com/TP-LINK-TL-WDR4300-Wireless-Gigabit-300Mbps/dp/B0088CJT4U">TP-Link N750</a>, and I’m using the latest <a href="http://downloads.openwrt.org/snapshots/trunk/ar71xx/">trunk build</a>.</p>
<p><strong><a href="/assets/img/98.jpg"><img src="/assets/img/98.jpg" alt="" /></a></strong></p>
<p>Since revision <a href="https://dev.openwrt.org/changeset/39312/">39312</a> OpenWRT configures the dnsmasq service to read from <strong>/tmp/dnsmasq.d/</strong>, so it’s easy to dump block list there and reload the dnsmasq service to block undesired domains.</p>
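<p>The dumped lists use dnsmasq’s <code>address=</code> syntax, resolving each ad domain to an unroutable address; a sketch of what such a file might contain (the example entries are hypothetical and the exact format produced by the script may differ):</p>

```
# /tmp/dnsmasq.d/adblock.conf (hypothetical example entries)
address=/doubleclick.net/0.0.0.0
address=/ads.example.com/0.0.0.0
```

<p>After updating the list, restart dnsmasq with <code>/etc/init.d/dnsmasq restart</code> so the new entries take effect.</p>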
<pre>
# wget http://rawgit.com/javier-lopez/learn/master/sh/is/adblockupdater-openwrt -O /usr/bin/adblockupdater-openwrt
# chmod +x /usr/bin/adblockupdater-openwrt
# adblockupdater-openwrt
Getting yoyo ad list...
Getting winhelp2002 ad list...
Getting adaway ad list...
Getting hosts-file ad list...
Getting malwaredomainlist ad list...
Getting adblock.gjtech ad list...
Failed to establish connection
Getting someone who cares ad list...
69191 ad domains blocked.
Everything fine, restarting dnsmasq to implement new serverlist...
</pre>
<p>Add it to the cron job manager:</p>
<pre class="sh_sh">
# crontab -e
0 0 */1 * * /usr/bin/adblockupdater-openwrt
</pre>
<p>That’s it, happy blocking 😋</p>
<ul>
<li><a href="http://homepage.ruhr-uni-bochum.de/Jan.Holthuis/misc/adblock-on-your-openwrt-router/">Original blog post</a></li>
</ul>
block youtube by IP in openwrt routers2017-11-26T00:00:00+00:00http://javier.io/blog/en/2017/11/26/block-youtube-in-openwrt<h2 id="block-youtube-by-ip-in-openwrt-routers">block youtube by IP in openwrt routers</h2>
<h6 id="26-nov-2017">26 Nov 2017</h6>
<p>In previous posts I wrote about how to install <a href="http://javier.io/blog/en/2014/07/21/installing-openwrt-as-access-point.html">openwrt as an access point</a> or as a <a href="http://javier.io/blog/en/2014/06/10/installing-openwrt-as-wireless-repeater.html">wireless repeater</a>; this time I’ll show how to block youtube and other third-party sites by ip. The procedure works on desktop and mobile devices.</p>
<p>The target router is a <a href="http://www.amazon.com/TP-LINK-TL-WDR4300-Wireless-Gigabit-300Mbps/dp/B0088CJT4U">TP-Link N750</a>, and I’m using the latest <a href="http://downloads.openwrt.org/snapshots/trunk/ar71xx/">trunk build</a>.</p>
<p><strong><a href="/assets/img/98.jpg"><img src="/assets/img/98.jpg" alt="" /></a></strong></p>
<p>OpenWRT uses <a href="https://wiki.openwrt.org/doc/uci">UCI</a> to centralize configuration, firewall rules are located at:</p>
<ul>
<li><strong>/etc/config/firewall</strong></li>
</ul>
<p>In order to block sites by IP you’ll need to modify that file, appending the desired rules, e.g. for blocking YouTube:</p>
<pre>
config rule
option name Block-YouTube-187.189.89.77/16
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 187.189.89.77/16
option target REJECT
config rule
option name Block-YouTube-189.203.0.0/16
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 189.203.0.0/16
option target REJECT
config rule
option name Block-YouTube-64.18.0.0/20
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 64.18.0.0/20
option target REJECT
config rule
option name Block-YouTube-64.233.160.0/19
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 64.233.160.0/19
option target REJECT
config rule
option name Block-YouTube-66.102.0.0/20
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 66.102.0.0/20
option target REJECT
config rule
option name Block-YouTube-66.249.80.0/20
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 66.249.80.0/20
option target REJECT
config rule
option name Block-YouTube-72.14.192.0/18
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 72.14.192.0/18
option target REJECT
config rule
option name Block-YouTube-74.125.0.0/16
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 74.125.0.0/16
option target REJECT
config rule
option name Block-YouTube-173.194.0.0/16
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 173.194.0.0/16
option target REJECT
config rule
option name Block-YouTube-207.126.144.0/20
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 207.126.144.0/20
option target REJECT
config rule
option name Block-YouTube-209.85.128.0/17
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 209.85.128.0/17
option target REJECT
config rule
option name Block-YouTube-216.58.208.0/20
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 216.58.208.0/20
option target REJECT
config rule
option name Block-YouTube-216.239.32.0/19
option src lan
option family ipv4
option proto all
option dest wan
option dest_ip 216.239.32.0/19
option target REJECT
</pre>
<p>Make sure to restart the firewall service to apply the changes:</p>
<pre class="sh_sh">
# /etc/init.d/firewall restart
</pre>
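<p>The same rules can also be added from the shell with UCI instead of editing the file by hand; a sketch for one of the ranges above (repeat per range):</p>

```
uci add firewall rule
uci set firewall.@rule[-1].name='Block-YouTube-74.125.0.0/16'
uci set firewall.@rule[-1].src='lan'
uci set firewall.@rule[-1].family='ipv4'
uci set firewall.@rule[-1].proto='all'
uci set firewall.@rule[-1].dest='wan'
uci set firewall.@rule[-1].dest_ip='74.125.0.0/16'
uci set firewall.@rule[-1].target='REJECT'
uci commit firewall
/etc/init.d/firewall restart
```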
<p>That’s it, happy blocking 😋</p>
<ul>
<li><a href="http://wiki.openwrt.org/toh/tp-link/tl-wdr4300">Tl-wdr4300 in OpenWRT</a></li>
<li><a href="https://wiki.openwrt.org/doc/uci/firewall">OpenWRT Firewall Documentation</a></li>
<li><a href="https://stackoverflow.com/a/28797030/890858">YouTube IP Range</a></li>
</ul>
installing openwrt as a dumb access point2017-11-25T00:00:00+00:00http://javier.io/blog/en/2017/11/25/installing-openwrt-as-a-dumb-access-point<h2 id="installing-openwrt-as-a-dumb-access-point">installing openwrt as a dumb access point</h2>
<h6 id="25-nov-2017">25 Nov 2017</h6>
<p>In a previous post I wrote about how to use <a href="http://javier.io/blog/en/2017/11/23/installing-openwrt-as-access-point.html">openwrt as an independent access point</a>, this time however I’ll mention how to configure it to extend a network that already has a router with dhcp in place or where a subnet is not required / desired.</p>
<p>The target device is a <a href="http://www.amazon.com/TP-LINK-TL-WDR4300-Wireless-Gigabit-300Mbps/dp/B0088CJT4U">TP-Link N750</a>, and I’m using the latest <a href="http://downloads.openwrt.org/releases/18.06.2/targets/ar71xx/generic/openwrt-18.06.2-ar71xx-generic-tl-wdr4300-v1-squashfs-factory.bin">stable build</a>, the installation process is pretty straight forward.</p>
<p><strong><a href="/assets/img/98.jpg"><img src="/assets/img/98.jpg" alt="" /></a></strong></p>
<pre class="sh_sh">
$ wget downloads.openwrt.org/releases/18.06.2/targets/ar71xx/generic/openwrt-18.06.2-ar71xx-generic-tl-wdr4300-v1-squashfs-factory.bin
</pre>
<p>Or, when there is a previous openwrt version already installed:</p>
<pre class="sh_sh">
$ wget downloads.openwrt.org/releases/18.06.2/targets/ar71xx/generic/openwrt-18.06.2-ar71xx-generic-tl-wdr4300-v1-squashfs-sysupgrade.bin
</pre>
<p>After completing the download, install it by going to the <strong>Firmware Upgrade</strong> menu and selecting the openwrt firmware.</p>
<p><strong><a href="/assets/img/99.png"><img src="/assets/img/99.png" alt="" /></a></strong></p>
<p>The stable version already includes the <a href="https://github.com/openwrt/luci">luci web interface</a>, so there is no need to install anything else.</p>
<h3 id="configuration-via-web-interface-luci">Configuration via Web Interface LUCI</h3>
<p>Unplug everything but your own computer from the device and wait for a valid ip, by default in the range 192.168.1.X; connect to the router through the <a href="http://192.168.1.1">http://192.168.1.1</a> address and select the <strong>LAN INTERFACE</strong></p>
<p>Edit it with a valid static IP within the range of your main router (e.g. if your router has IP 192.168.1.1, enter 192.168.1.2). Set DNS and gateway to point to your main router to enable internet access for the dumb AP itself.</p>
<p><strong><a href="/assets/img/openwrt-dumb-ap-lan.png"><img src="/assets/img/openwrt-dumb-ap-lan.png" alt="" /></a></strong></p>
<p>Then scroll down and select the checkbox <strong>Ignore interface: Disable DHCP for this interface.</strong></p>
<p><strong><a href="/assets/img/openwrt-dumb-ap-disable-dhcp.png"><img src="/assets/img/openwrt-dumb-ap-disable-dhcp.png" alt="" /></a></strong></p>
<p>Before applying the change, prepare the ethernet cable; you’ll have 30 seconds to connect it, request a new IP address and access the router web interface, otherwise it’ll revert the change and you’ll have to redo the configuration. Connect a LAN/switch port of your main router to a LAN/switch port of your dumb AP, <strong>avoid the WAN/Internet ports</strong>, and click <strong>Save & Apply</strong>.</p>
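<p>For the console-inclined, the equivalent of the steps above can be done over ssh with UCI (a sketch; the interface name and addresses match this example, adjust them for your network):</p>

```
uci set network.lan.ipaddr='192.168.1.2'
uci set network.lan.gateway='192.168.1.1'
uci set network.lan.dns='192.168.1.1'
uci set dhcp.lan.ignore='1'
uci commit
/etc/init.d/network restart
```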
<p>Access the dumb AP (on this example) through the <a href="http://10.9.8.7">http://10.9.8.7</a> IP, and go to the <strong>Network ▷ Interfaces</strong> page for disabling the <strong>WAN</strong> interfaces.</p>
<p><strong><a href="/assets/img/openwrt-dumb-ap-disable-wan-interfaces.png"><img src="/assets/img/openwrt-dumb-ap-disable-wan-interfaces.png" alt="" /></a></strong></p>
<p>We’re almost done; as a final step, set up the wireless APs: go to the <strong>Network ▷ Wireless</strong> section, configure as many Access Points as desired, and link them to the <strong>LAN</strong> <strong>Network</strong></p>
<p><strong><a href="/assets/img/openwrt-dumb-ap-wireless-details.png"><img src="/assets/img/openwrt-dumb-ap-wireless-details.png" alt="" /></a></strong></p>
<p><strong><a href="/assets/img/openwrt-dumb-ap-wireless-general.png"><img src="/assets/img/openwrt-dumb-ap-wireless-general.png" alt="" /></a></strong></p>
<p>That’s it!, enjoy your extended network ✌</p>
<ul>
<li><a href="https://openwrt.org/docs/guide-user/network/wifi/dumbap">https://openwrt.org/docs/guide-user/network/wifi/dumbap</a></li>
</ul>
installing openwrt as an access point2017-11-24T00:00:00+00:00http://javier.io/blog/en/2017/11/24/installing-openwrt-as-access-point<h2 id="installing-openwrt-as-an-access-point">installing openwrt as an access point</h2>
<h6 id="24-nov-2017">24 Nov 2017</h6>
<p>In a previous post I wrote about how to use <a href="http://javier.io/blog/en/2014/06/10/installing-openwrt-as-wireless-repeater.html">openwrt as a wireless repeater</a>; this time I’ll use it as an independent access point with its own subnet, how practical!</p>
<p>The target device is a <a href="http://www.amazon.com/TP-LINK-TL-WDR4300-Wireless-Gigabit-300Mbps/dp/B0088CJT4U">TP-Link N750</a>, and I’m using the latest <a href="http://downloads.openwrt.org/snapshots/trunk/ar71xx/">trunk build</a>; the installation process is pretty straightforward.</p>
<p><strong><a href="/assets/img/98.jpg"><img src="/assets/img/98.jpg" alt="" /></a></strong></p>
<pre class="sh_sh">
$ wget downloads.openwrt.org/snapshots/trunk/ar71xx/generic/openwrt-ar71xx-generic-tl-wdr4300-v1-squashfs-factory.bin
</pre>
<p>Or, when there is a previous openwrt version installed:</p>
<pre class="sh_sh">
$ wget downloads.openwrt.org/snapshots/trunk/ar71xx/generic/openwrt-ar71xx-generic-tl-wdr4300-v1-squashfs-sysupgrade.bin
</pre>
<p>After completing the download, install it by going to the <strong>Firmware Upgrade</strong> menu and selecting the openwrt firmware.</p>
<p><strong><a href="/assets/img/99.png"><img src="/assets/img/99.png" alt="" /></a></strong></p>
<p>Be aware that the trunk build is minimal: it doesn’t include the <a href="https://github.com/openwrt/luci">luci web interface</a>, so it’s up to you to decide whether you want it.</p>
<p>To install additional software, connect to the device and temporarily share your laptop/desktop internet connection</p>
<pre class="sh_sh">
# flush previous iptables rules
$ sudo iptables -F
$ sudo iptables -X
$ sudo iptables -t nat -F
$ sudo iptables -t nat -X
$ sudo iptables -t mangle -F
$ sudo iptables -t mangle -X
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
</pre>
<pre class="sh_sh">
# route laptop traffic through wlan0 (wireless) interface
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
$ while true; do sudo ifconfig eth0 192.168.1.2; sleep 1; done
$ telnet 192.168.1.1 #type "passwd" to set the root passwd
# be aware that in current openwrt releases telnet is no longer provided
# in those cases just skip this step
$ ssh root@192.168.1.1 #from other terminal window
openwrt # passwd #set the root passwd in case telnet service isn't available
openwrt # ifconfig br-lan 10.9.8.7
$ while true; do sudo ifconfig eth0 10.9.8.10; sleep 1; done #bypass networkmanager
$ ssh root@10.9.8.7
openwrt # route add default gw 10.9.8.10
openwrt # echo "nameserver 8.8.8.8" > /etc/resolv.conf
openwrt # opkg update
openwrt # opkg install luci
openwrt # /etc/init.d/uhttpd enable
openwrt # /etc/init.d/uhttpd start
</pre>
<p>Upon completing the installation, go to <a href="http://10.9.8.7">http://10.9.8.7</a> and reconfigure the LAN interface to make the IP address permanent:</p>
<ul>
<li>Network ▷ Interfaces ▷ LAN</li>
</ul>
<p><strong><a href="/assets/img/100.png"><img src="/assets/img/100.png" alt="" /></a></strong></p>
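<p>For reference, the change made through luci ends up in <strong>/etc/config/network</strong>; the resulting <strong>lan</strong> stanza should look roughly like the following sketch (option names follow UCI network syntax; the interface type, address and netmask shown here are assumptions and vary per device):</p>

```
config interface 'lan'
        option type 'bridge'
        option proto 'static'
        option ipaddr '10.9.8.7'
        option netmask '255.255.255.0'
```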
<p>Create the Access Point (linked to the <strong>lan</strong> interface)</p>
<ul>
<li>Network ▷ Wifi ▷ Add</li>
</ul>
<p><strong><a href="/assets/img/openwrt-ap.png"><img src="/assets/img/openwrt-ap.png" alt="" /></a></strong></p>
<p>Connect an ethernet cable to the WAN interface (on this device it’s the blue port on the back) and enjoy, happy browsing ✌</p>
<ul>
<li><a href="http://wiki.openwrt.org/toh/tp-link/tl-wdr4300">http://wiki.openwrt.org/toh/tp-link/tl-wdr4300</a></li>
</ul>
installing openwrt as a wireless repeater2017-11-23T00:00:00+00:00http://javier.io/blog/en/2017/11/23/installing-openwrt-as-wireless-repeater<h2 id="installing-openwrt-as-a-wireless-repeater">installing openwrt as a wireless repeater</h2>
<h6 id="23-nov-2017">23 Nov 2017</h6>
<p>Last weekend I spent some time at my parents’ house and the occasion was appropriate to extend the wifi signal to cover the whole house; since I don’t intend to repeat the setup in the near future but would still like to have a reference, I decided to wrap it up in a post ☺</p>
<p>The first thing I did was grab a TP-Link N750, officially a <a href="http://www.amazon.com/TP-LINK-TL-WDR4300-Wireless-Gigabit-300Mbps/dp/B0088CJT4U">TL-WDR4300 Version 1.7</a> router, for $60 at a nearby shop. I didn’t choose it for anything in particular other than its nice antennas and dual-band support.</p>
<p><strong><a href="/assets/img/98.jpg"><img src="/assets/img/98.jpg" alt="" /></a></strong></p>
<p>Getting the latest openwrt <a href="http://downloads.openwrt.org/snapshots/trunk/ar71xx/">trunk build</a> and installing it on the device is pretty straightforward.</p>
<pre class="sh_sh">
$ wget downloads.openwrt.org/snapshots/trunk/ar71xx/generic/openwrt-ar71xx-generic-tl-wdr4300-v1-squashfs-factory.bin
</pre>
<p>Or to upgrade it from a previous release:</p>
<pre class="sh_sh">
$ wget downloads.openwrt.org/snapshots/trunk/ar71xx/generic/openwrt-ar71xx-generic-tl-wdr4300-v1-squashfs-sysupgrade.bin
</pre>
<p>To flash the image go to the <strong>System Tools ▷ Firmware Upgrade</strong> menu</p>
<p><strong><a href="/assets/img/99.png"><img src="/assets/img/99.png" alt="" /></a></strong></p>
<p>Be aware that the trunk build is minimal: it doesn’t include the <a href="http://luci.subsignal.org">luci web interface</a>, so it’s up to you to decide whether you want it.</p>
<p>To install additional software, connect to the device and temporarily share your laptop/desktop internet connection</p>
<pre class="sh_sh">
# flush previous iptables rules
$ sudo iptables -F
$ sudo iptables -X
$ sudo iptables -t nat -F
$ sudo iptables -t nat -X
$ sudo iptables -t mangle -F
$ sudo iptables -t mangle -X
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
</pre>
<pre class="sh_sh">
# route laptop traffic through wlan0 (wireless) interface
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
$ while true; do sudo ifconfig eth0 192.168.1.2; sleep 1; done
$ telnet 192.168.1.1 #type "passwd" to set the root passwd
# be aware that in current openwrt releases telnet is no longer provided
# in those cases just skip this step
$ ssh root@192.168.1.1 #from other terminal window
openwrt # passwd #set the root passwd in case telnet service wasn't available
openwrt # ifconfig br-lan 10.9.8.7
$ while true; do sudo ifconfig eth0 10.9.8.10; sleep 1; done #bypass networkmanager
$ ssh root@10.9.8.7
openwrt # route add default gw 10.9.8.10
openwrt # echo "nameserver 8.8.8.8" > /etc/resolv.conf
openwrt # opkg update
openwrt # opkg install luci relayd
openwrt # /etc/init.d/uhttpd enable
openwrt # /etc/init.d/uhttpd start
openwrt # /etc/init.d/relayd enable
openwrt # /etc/init.d/relayd start
</pre>
<p>Upon completing the installation, go to the web interface, <a href="http://10.9.8.7">http://10.9.8.7</a>, and reconfigure the LAN interface to make the IP address permanent:</p>
<ul>
<li>Network ▷ Interfaces ▷ LAN</li>
</ul>
<p><strong><a href="/assets/img/100.png"><img src="/assets/img/100.png" alt="" /></a></strong></p>
<p>Now, it’s time to create the bridge interface (joining the <strong>lan</strong> and <strong>wwan</strong> interfaces)</p>
<p><strong><a href="/assets/img/101.png"><img src="/assets/img/101.png" alt="" /></a></strong></p>
<p><strong><a href="/assets/img/openwrt-bridge.png"><img src="/assets/img/openwrt-bridge.png" alt="" /></a></strong></p>
<p>And join the nearby AP (linked to the <strong>bridge/wwan</strong> interface)</p>
<ul>
<li>Network ▷ Wifi ▷ Scan</li>
</ul>
<p><strong><a href="/assets/img/openwrt-client.png"><img src="/assets/img/openwrt-client.png" alt="" /></a></strong></p>
<p>Finally, don’t forget to create the AP repeater (linked to the <strong>lan</strong> interface)</p>
<ul>
<li>Network ▷ Wifi ▷ Add</li>
</ul>
<p><strong><a href="/assets/img/openwrt-ap.png"><img src="/assets/img/openwrt-ap.png" alt="" /></a></strong></p>
<p>That’s it! A simple and robust wifi extender ✌</p>
<ul>
<li><a href="http://wiki.openwrt.org/toh/tp-link/tl-wdr4300">http://wiki.openwrt.org/toh/tp-link/tl-wdr4300</a></li>
<li><a href="http://tombatossals.github.io/openwrt-repetidor-wireless/">http://tombatossals.github.io/openwrt-repetidor-wireless/</a></li>
</ul>
staticus, a poor man status page generator2016-05-19T00:00:00+00:00http://javier.io/blog/en/2016/05/19/staticus-poor-man-status-page-generator<h2 id="staticus-a-poor-man-status-page-generator">staticus, a poor man status page generator</h2>
<h6 id="19-may-2016">19 May 2016</h6>
<p>I’m not sure what excuse to use to justify this entry, I guess I’m just a lazy and irresponsible person. Last week I found myself in need of a basic status page generator and all the alternatives I looked at were either too complicated or non-free (as in speech and beer), so I decided to go my own way (that’s the irresponsible part) and bundle everything in a single shell script to avoid dependencies (that’s the lazy one).</p>
<p><a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/staticus">Staticus</a> is the result.</p>
<p><strong><a href="/assets/img/staticus-1.png"><img src="/assets/img/staticus-1.png" alt="" /></a></strong></p>
<p>The tool itself is pretty simple; to generate the above picture I ran:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>staticus go #generate staticus.txt and staticus.html in the current directory
</code></pre></div></div>
<p>It can be added to a cronjob to run periodically</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>* * * * * /path/to/staticus -o /var/www/status/index.html -O /tmp/staticus.txt
</code></pre></div></div>
<p>The script accepts several options; however, to set threshold values and other <em>advanced</em> parameters it’s best to define a configuration file (by default /etc/staticus.conf):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>module_memory_threshold="80"
module_swap_threshold="80"
module_load_threshold="4"
module_storage_threshold="80"
</code></pre></div></div>
<p>Other possible values are described in the <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/staticus#L8">configuration section</a>.</p>
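<p>As a rough illustration of what a threshold drives, the sketch below compares a memory reading against <strong>module_memory_threshold</strong>; the calculation from /proc/meminfo is my own approximation, not staticus’ exact module logic:</p>

```sh
#!/bin/sh
#approximate a memory module check: compute used-memory percentage and
#compare it against the configured threshold (assumed shape, the real
#module names and logic live in the staticus script itself)
module_memory_threshold="80"

mem_used_pct() {
    awk '/^MemTotal/     {t = $2}
         /^MemAvailable/ {a = $2}
         END {printf "%d", (t - a) * 100 / t}' /proc/meminfo
}

if [ "$(mem_used_pct)" -gt "${module_memory_threshold}" ]; then
    echo "memory: danger"
else
    echo "memory: operational"
fi
```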
<p>That’s it, if you ever use staticus, you didn’t get it from me 😋</p>
<ul>
<li><a href="https://github.com/jayfk/statuspage">@jayfk’s statuspage project, from where I stole the html theme</a></li>
</ul>
terminfo variable2016-02-12T00:00:00+00:00http://javier.io/blog/en/2016/02/12/terminfo-error-opening-terminal<h2 id="terminfo-variable">terminfo variable</h2>
<h6 id="12-feb-2016">12 Feb 2016</h6>
<p>This is a quick reminder to my future self about how to fix some annoying TERM errors</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./bin/atop
Error opening terminal: rxvt-unicode-256color.
</code></pre></div></div>
<p>In these cases it can help to set the TERM variable to a more standard type, eg:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>TERM=xterm ./bin/atop
</code></pre></div></div>
<p>And/or specify the TERMINFO variable, eg:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>TERMINFO='/usr/share/terminfo/' ./bin/atop
</code></pre></div></div>
<p>This is especially useful for compiled programs that had the TERMINFO variable set to a different path at compilation time.</p>
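<p>The workaround can be automated with a small wrapper that keeps common TERM values and falls back to xterm for the exotic ones; the list of problematic values below is purely illustrative:</p>

```sh
#!/bin/sh
#keep TERM when it's a common value, fall back to xterm for exotic ones
#that some binaries lack terminfo entries for (the list is illustrative)
fallback_term() {
    case "${1}" in
        rxvt-unicode*|st-256color|tmux*) echo "xterm" ;;
        *) echo "${1}" ;;
    esac
}

TERM="$(fallback_term "${TERM:-xterm}")"; export TERM
```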
<p>That’s it, happy launching 😋</p>
<ul>
<li><a href="http://stackoverflow.com/questions/12345675/screen-cannot-find-terminfo-entry-for-xterm-256color">screen cannot find terminfo entry</a></li>
</ul>
a simple cli upnp/dlna browser2016-01-22T00:00:00+00:00http://javier.io/blog/en/2016/01/22/simple-upnp-dlna-browser<h2 id="a-simple-cli-upnpdlna-browser">a simple cli upnp/dlna browser</h2>
<h6 id="22-jan-2016">22 Jan 2016</h6>
<p>Last weekend I attached a usb hard disk to my <a href="https://openwrt.org/">openwrt</a> <a href="http://javier.io/blog/en/2014/06/10/installing-openwrt-as-wireless-repeater.html">router</a>, added some content, set up <a href="https://wiki.openwrt.org/doc/uci/minidlna">minidlna</a> and called it a day; an easy way to stream movies locally. I tested the setup with all my endpoints, and while it worked great with most of them I had problems streaming to my Linux laptop, which is funny considering the router itself runs the same OS.</p>
<p>Looking around I read suggestions about installing vlc, totem, xbmc, etc. All of those media players are great; however, I already have mplayer2, which is able to play http streams, and that’s <a href="https://gxben.wordpress.com/2008/08/24/why-do-i-hate-dlna-protocol-so-much/">a great deal</a> of what upnp/dlna is about. So I took some time to hack a quick and dirty script, and that’s how <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/simple-dlna-browser">simple-dlna-browser</a> was born.</p>
<pre class="sh_sh">
$ simple-dlna-browser
Usage: simple-dlna-browser [OPTIONS] PATTERN
</pre>
<h3 id="examples">Examples</h3>
<pre class="sh_sh">
$ simple-dlna-browser -l #autodetection requires 'socat'
http://192.168.1.254:8200/rootDesc.xml (Multimedia)
┬
├── Apocalipto
├── Contacto
├── Coraline.y.la.puerta.secreta
...
$ simple-dlna-browser contacto | xargs mplayer
$ simple-dlna-browser -s 192.168.1.254 contacto | xargs mplayer
</pre>
<p>I used minidlna 1.1.4-2 as a reference, so it may not work with other media servers.</p>
<p>That’s it, happy streaming 😋</p>
<ul>
<li><a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/simple-dlna-browser">https://github.com/javier-lopez/learn/blob/master/sh/tools/simple-dlna-browser</a></li>
</ul>
genpass, yet another stateless password generator2016-01-07T00:00:00+00:00http://javier.io/blog/en/2016/01/07/genpass-yet-another-stateless-password-generator<h2 id="genpass-yet-another-stateless-password-generator">genpass, yet another stateless password generator</h2>
<h6 id="07-jan-2016">07 Jan 2016</h6>
<p>Some time ago I realized I’m pretty bad at memorizing strong passwords; as a result I’ve been using a unique, moderately strong “master” password and deriving others with a shell alias:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ alias getpass='_getpass() { _g=$(printf "%s" "${*}" | \
md5sum | openssl enc -base64 | \
cut -c1-20); printf "%s" "${_g}" | \
xclip -selection clipboard 2>/dev/null || \
printf "%s\\n" "${_g}"; }; _getpass'
</code></pre></div></div>
<p>I knew that the resulting passwords weren’t really good at keeping my master password secure; after all, md5 hashing is extremely fast and has known <a href="http://www.mscs.dal.ca/~selinger/md5collision/">collision</a> <a href="http://natmchugh.blogspot.mx/2015/02/create-your-own-md5-collisions.html">problems</a>, and even worse, I didn’t even iterate over it. So I kept it secret until I had the time or willingness to use an informed solution. During the last month I’ve been reviewing the state of the art of password generation schemes and found that some <a href="https://en.wikipedia.org/wiki/Bcrypt">derivation</a> <a href="https://en.wikipedia.org/wiki/Scrypt">functions</a> have been designed specifically for this task. So I converted the shell alias to a C program and <a href="https://github.com/javier-lopez/genpass">genpass</a> is the result.</p>
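<p>For comparison, even naive iteration would have raised the cost of each guess; the toy below chains sha256 a thousand times, which is still far weaker than the memory-hard scrypt derivation genpass actually uses:</p>

```sh
#!/bin/sh
#toy iterated-sha256 derivation: better than the single-pass md5 alias
#above (each guess now costs 1000 hashes instead of one), still far
#weaker than a memory-hard KDF such as scrypt
getpass() {
    _h="$(printf "%s" "${*}" | sha256sum | cut -d" " -f1)"
    _i=0
    while [ "${_i}" -lt 1000 ]; do
        _h="$(printf "%s" "${_h}" | sha256sum | cut -d" " -f1)"
        _i=$((_i + 1))
    done
    printf "%s\n" "${_h}" | cut -c1-20
}
```

<p>The derivation stays deterministic: the same master password and site string always produce the same 20-character output.</p>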
<p>Genpass is by no means original; however, when looking around I found that most password generators were plain broken: most of them iterate fast checksum functions (md5, sha1, sha512, etc), and the ones using either bcrypt or scrypt use hard-coded parameters which could make them vulnerable to future computers. Using a slow key derivation function is only as practical as the user is willing to wait, so more secure parameters often aren’t used because they would make the derivation painfully slow. Fortunately some smart guys found this <a href="https://www.cs.utexas.edu/%7Ebwaters/publications/papers/www2005.pdf">problem before</a> and suggested using a cache key to accelerate the process for legitimate users. That’s what genpass uses to propose the following paranoid defaults.</p>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cache cost (Scrypt N)</td>
<td>2^20</td>
</tr>
<tr>
<td>Cost (Scrypt N)</td>
<td>2^14</td>
</tr>
<tr>
<td>Scrypt r</td>
<td>8</td>
</tr>
<tr>
<td>Scrypt p</td>
<td>16</td>
</tr>
<tr>
<td>Key length</td>
<td>32 bytes, 256 bits</td>
</tr>
<tr>
<td>Encoding</td>
<td>z85</td>
</tr>
</tbody>
</table>
<p>It’s still convenient to use your own parameters, as the default settings will change as computers improve in CPU/RAM.</p>
<h3 id="usage">Usage</h3>
<p><a href="https://raw.githubusercontent.com/javier-lopez/genpass/master/genpass.gif"><img src="https://raw.githubusercontent.com/javier-lopez/genpass/master/genpass.gif" alt="" style="border: 1px solid white;margin-bottom: 3%;" /></a>
<!--$ genpass-->
<!--Name: Guy Mann-->
<!--Site: github.com-->
<!--Master password: passwd #it won't be shown-->
<!--4c%7hZ5w]MZUB6RRPCJ&?wKTFtd[6Oj.P.02d+kIs--></p>
<p>The first time it’s executed it will take a relatively long time (a couple of minutes) to return. It creates a cache key and saves it to <code class="language-plaintext highlighter-rouge">~/.genpass-cache</code> (this path can be customized), then combines it with the master password and the site string to generate the final password, which can be emitted in several encodings (z85 by default). The cache key file should be guarded with moderate caution: if it gets leaked, attackers may have an easier time guessing the master password (although it will still be considerably harder than an average brute force attack). Later invocations return almost instantly (0.1s on average). This way the scheme strives for the best balance between security and usability.</p>
<p>I’ve also added a <code class="language-plaintext highlighter-rouge">getpass</code> wrapper which pastes the resulting password into the system clipboard and sets a timeout (10 seconds by default) after which the password is removed.</p>
<h3 id="installation">Installation</h3>
<h5 id="ubuntu-based-systems-lts">Ubuntu based systems (LTS)</h5>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo add-apt-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install genpass
</code></pre></div></div>
<h5 id="other-linux-distributions-static-binaries">Other Linux distributions, static binaries</h5>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sh <(wget -qO- s.minos.io/s) -x genpass
</code></pre></div></div>
<h5 id="from-source">From source</h5>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ make
</code></pre></div></div>
<p>That’s it, happy password generation 😋</p>
<p>References</p>
<ul>
<li><a href="https://github.com/javier-lopez/genpass">genpass</a></li>
<li><a href="https://en.wikipedia.org/wiki/Bcrypt">bcrypt</a>, slow key derivation function</li>
<li><a href="https://en.wikipedia.org/wiki/Scrypt">scrypt</a>, slow key derivation function</li>
<li><a href="https://www.pwdhash.com/">pwdhash</a>, md5 based password generator, js</li>
<li><a href="http://www.supergenpass.com/">supergenpass</a>, md5 iteration based password generator, js</li>
<li><a href="http://passwordmaker.org">passwordmaker</a>, md5, sha1, sha256, <a href="http://passwordmaker.org/FAQ#Which_hash_algorithms_are_supported.3F">etc</a> based password generator, several implementations</li>
<li><a href="http://masterpasswordapp.com/">masterpassword</a>, hard-coded scrypt based password generator, several implementations</li>
<li><a href="https://github.com/kaepora/npwd">npwd</a>, hard-coded scrypt based password generator, Nodejs</li>
<li><a href="https://github.com/postboy/cpwd">cpwd</a>, hard-coded scrypt based password generator, C</li>
</ul>
a simple mpc web interface2016-01-02T00:00:00+00:00http://javier.io/blog/en/2016/01/02/simple-mpc-web-interface<h2 id="a-simple-mpc-web-interface">a simple mpc web interface</h2>
<h6 id="02-jan-2016">02 Jan 2016</h6>
<p>Sometimes while listening to music at my desk my niece (~8y/o) shows up and asks me to skip the current song. Most of the time I do it instantly, however when I’m really busy I may delay a few seconds; on those occasions she goes over my keyboard and presses the <code class="language-plaintext highlighter-rouge">next</code> button herself. This morning was one of those days, so I thought it shouldn’t be too difficult to install an mpd client on her ipad to give her full control =)</p>
<p>It turned out to be more trouble than I thought: first, there are no free mpd clients on the ipad software store (or they’re not available in my region, LATAM), and most <a href="http://mpd.wikia.com/wiki/Clients">web clients</a> require a fair amount of dependencies and some work to get running. I didn’t want yet another service to maintain, so I decided to hack a simple web interface for <code class="language-plaintext highlighter-rouge">mpc</code>, based on Gwenn Englebienne’s previous work on <a href="http://www.gwenn.dk/mplayer-remote.html">mplayer</a>, and this is the result:</p>
<pre class="sh_sh">
$ wget https://raw.githubusercontent.com/javier-lopez/learn/master/python/simple-mpc-remote
$ python simple-mpc-remote -p 8080
Started httpserver on port 8080
</pre>
<p><strong><a href="/assets/img/simple-mpc-remote.png"><img src="/assets/img/simple-mpc-remote.png" alt="" /></a></strong></p>
<p><code class="language-plaintext highlighter-rouge">simple-mpc-remote</code> has no dependencies other than python 2.7+, mpc and mpd, and it’s really simple to install and use. Since it makes little effort to sanitize input it could be dangerous; however, since I trust my local network I’ll leave it like that for now.</p>
<p>Happy skipping 😋</p>
<ul>
<li><a href="http://www.gwenn.dk/mplayer-remote.html">mplayer-remote</a></li>
<li><a href="https://raw.githubusercontent.com/javier-lopez/learn/master/python/simple-mpc-remote">simple-mpc-remote</a></li>
</ul>
using imagemagick, awk and kmeans to find dominant colors in images2015-09-30T00:00:00+00:00http://javier.io/blog/en/2015/09/30/using-imagemagick-and-kmeans-to-find-dominant-colors-in-images<h2 id="using-imagemagick-awk-and-kmeans-to-find-dominant-colors-in-images">using imagemagick, awk and kmeans to find dominant colors in images</h2>
<h6 id="30-sep-2015">30 Sep 2015</h6>
<p>Some days ago I was reading <a href="http://charlesleifer.com/blog/using-python-to-generate-awesome-linux-desktop-themes/">“Using python to generate awesome linux desktop themes”</a> and was impressed by a technique to obtain dominant colors from images. I went ahead and tried to run the examples, but <a href="http://www.pythonware.com/products/pil/">PIL</a> proved difficult to install, so I looked around to see if I could replace it with some other utility, and it turned out that <a href="http://www.imagemagick.org/script/convert.php"><strong>convert</strong></a> (which is part of the imagemagick package) is powerful enough for the job.</p>
<p>Besides resizing, <strong>convert</strong> can output the rgb values of any image, so I reimplemented the k-means algorithm in awk, and that’s how <a href="https://raw.githubusercontent.com/javier-lopez/learn/master/sh/tools/dcolors">dcolors</a> was born. By default dcolors will resize (in RAM) the input image to 25x25, using a 1px deviation and 3 clusters, for an average time of 1s per image; further customization is possible to increase quality, quantity or performance.</p>
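<p>The heart of the approach can be condensed to a toy 1-D k-means in awk (k=2, a fixed number of iterations); dcolors applies the same assignment/update loop to the 3-D rgb samples emitted by <strong>convert</strong>:</p>

```sh
#!/bin/sh
#minimal 1-D k-means in awk: assign each value to the nearest centroid,
#then recompute centroids as cluster means, for a fixed 10 iterations
kmeans1d() {
    printf '%s\n' "$@" | awk '
    { v[NR] = $1 }
    END {
        c[1] = v[1]; c[2] = v[NR]            #seed centroids with extremes
        for (it = 0; it < 10; it++) {
            split("", s); split("", n)
            for (i = 1; i <= NR; i++) {      #assignment step
                k = (v[i] - c[1])^2 < (v[i] - c[2])^2 ? 1 : 2
                s[k] += v[i]; n[k]++
            }
            for (k = 1; k <= 2; k++)         #update step
                if (n[k]) c[k] = s[k] / n[k]
        }
        printf "%g %g\n", c[1], c[2]
    }'
}

kmeans1d 1 2 3 10 11 12   #prints: 2 11
```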
<pre class="lyric">
$ time dcolors akira_800x800.jpg
163,80,50
65,77,93
40,26,34
real 0m1.176s
</pre>
<p><strong><a href="/assets/img/akira_800x800.jpg"><img src="/assets/img/akira_800x800.jpg" alt="" /></a></strong></p>
<center>
<span style="background-color: #a35032"> </span>
<span style="background-color: #414d5d"> </span>
<span style="background-color: #281a22"> </span>
</center>
<p></p>
<pre class="lyric">
$ time ./dcolors --resize 100x100 -d 10 akira-cycle-2.png
49,85,118
19,42,69
125,173,165
real 0m3.188s
</pre>
<p><strong><a href="/assets/img/akira-cycle-2_800x800.png"><img src="/assets/img/akira-cycle-2_800x800.png" alt="" /></a></strong></p>
<center>
<span style="background-color: #315576"> </span>
<span style="background-color: #132a45"> </span>
<span style="background-color: #7dada5"> </span>
</center>
<p></p>
<pre class="lyric">
$ time ./dcolors -f hex -k 8 akira-neo-tokyo-7_800x800.png
#495D66
#223634
#1C293A
#68706E
#3C4F4A
#38495D
#293C48
#0B1016
real 0m1.005s
</pre>
<p><strong><a href="/assets/img/akira-neo-tokyo-7_800x800.png"><img src="/assets/img/akira-neo-tokyo-7_800x800.png" alt="" /></a></strong></p>
<center>
<span style="background-color: #495D66"> </span>
<span style="background-color: #223634"> </span>
<span style="background-color: #1C293A"> </span>
<span style="background-color: #68706E"> </span>
<span style="background-color: #3C4F4A"> </span>
<span style="background-color: #38495D"> </span>
<span style="background-color: #293C48"> </span>
<span style="background-color: #0B1016"> </span>
</center>
<p>That’s it, happy hacking 😋</p>
<ul>
<li><a href="http://charlesleifer.com/blog/using-python-and-k-means-to-find-the-dominant-colors-in-images/">http://charlesleifer.com/blog/using-python-and-k-means-to-find-the-dominant-colors-in-images/</a></li>
<li><a href="https://en.wikipedia.org/wiki/K-means_clustering">https://en.wikipedia.org/wiki/K-means_clustering</a></li>
</ul>
install apt packages from deb postinst2015-09-10T00:00:00+00:00http://javier.io/blog/en/2015/09/10/apt-packages-from-deb-postinst<h2 id="install-apt-packages-from-deb-postinst">install apt packages from deb postinst</h2>
<h6 id="10-sep-2015">10 Sep 2015</h6>
<p>During the last couple of years I’ve been building <a href="https://github.com/minos-org">yet another Linux distribution</a>, mostly to have my favorite software nicely packaged, but also to experiment and have fun =)</p>
<p>One important part of it is its configuration file, <strong>/etc/minos/config</strong> or <strong>~/.minos/config</strong>, e.g.</p>
<pre class="sh_sh">
wallpaper ~/data/images/wallpapers/sunlight.png
lock-wallpaper ~/data/images/wallpapers/lock.png
app-core mozilla-firefox mozilla-flashplayer
app-purge xinetd sasl2-bin sendmail sendmail-base sendmail-bin sensible-mda
</pre>
<p>I’ve chosen Debian/Ubuntu infrastructure for the initial implementation but will probably change it in the future (bedrock linux?). Anyway, since some of the parameters accept additional packages, I’ve been having fun abusing the maintainer scripts to install them; this is how I’ve done it.</p>
<h2 id="locks">Locks</h2>
<p>Apt, rpm, and most package managers use locks to ensure their operations are as atomic as possible; it helps them keep packages under control. So, when trying to abuse maintainer scripts (in this case postinst), the first error to come up will be:</p>
<pre class="sh_sh">
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
</pre>
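<p>The refusal can be reproduced in isolation with <strong>flock</strong> on a scratch file (a sketch only: dpkg itself takes fcntl locks on the files listed below, flock merely shows the same mutual exclusion):</p>

```sh
#!/bin/sh
#try to take an exclusive, non-blocking lock on a file, mimicking what a
#second apt/dpkg instance experiences when the lock is already held
try_lock() {
    exec 9>"${1}"
    if flock -n 9; then
        echo "lock acquired"
    else
        echo "resource temporarily unavailable"
    fi
}

try_lock "$(mktemp)"   #prints: lock acquired
```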
<p>These files can be moved temporarily to launch additional apt/dpkg instances; after some experimentation, the list is as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/var/lib/dpkg/lock
/var/cache/apt/archives/lock
/var/lib/dpkg/updates/
</code></pre></div></div>
<p>Dpkg/apt-get uses a plain-text database located at <strong>/var/lib/dpkg/status</strong>; it’s important to keep track of it too, since the result of every apt/dpkg invocation is dumped to it upon completion (multiple backups are available at /var/backups/dpkg.status).</p>
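<p>The database format is trivial to query with standard tools; the self-contained sketch below runs the same kind of awk pattern the postinst later applies via busybox, against an inline two-package sample:</p>

```sh
#!/bin/sh
#list installed package names from dpkg's plain-text status format:
#remember the last "Package:" seen, print it when the matching "Status:"
#line says the package is actually installed
sample_status() {
cat <<'EOF'
Package: foo
Status: install ok installed
Version: 1.0

Package: bar
Status: deinstall ok config-files
Version: 2.0
EOF
}

installed_pkgs() {
    awk '/^Package: / {p = $2}
         /^Status: install ok installed$/ {print p}'
}

sample_status | installed_pkgs   #prints: foo
```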
<h2 id="post-execution">Post execution</h2>
<p>There seem to be several options to abuse apt-get: cron jobs, daemon queues (aptdaemon?), custom waits; but all of them require a considerable amount of time after the main apt-get/dpkg instance is done, and what if the system goes down shortly after? I finally decided to install everything within the main apt-get process and merge changes at the end (that way it takes a couple of seconds to process the remaining text operations instead of probably several minutes for further apt instances).</p>
<pre class="sh_sh">
#!/bin/sh
package=my-pkg
_dpkg_suspend_process() {
#unlock standard files
busybox mv /var/lib/dpkg/lock /var/lib/dpkg/lock.suspended
busybox rm -rf /var/lib/dpkg/updates.suspended/
busybox mv /var/lib/dpkg/updates/ /var/lib/dpkg/updates.suspended
busybox mkdir /var/lib/dpkg/updates/
busybox mv /var/cache/apt/archives/lock /var/cache/apt/archives/lock.suspended
#debconf missing file descriptors workaround
busybox cp /usr/share/debconf/confmodule /usr/share/debconf/confmodule.bk
busybox cp /usr/share/minos/debconf/confmodule /usr/share/debconf/confmodule
#while apt is being executed it modifies the status file which brings conflicts
#to new packages if they're installed/removed in abused apt instances, therefore
#the status-old file (which represent the original state in which the first
#apt instance was launched) is used to create temporal diffs which will be merged
#at the end
busybox cp /var/lib/dpkg/status /var/lib/dpkg/status.suspended
busybox cp /var/lib/dpkg/status-old /var/lib/dpkg/status-orig
busybox cp /var/lib/dpkg/status-orig /var/lib/dpkg/status
}
_dpkg_continue_process() {
#relock standard files
busybox rm -rf /var/lib/dpkg/updates
busybox mv /var/lib/dpkg/lock.suspended /var/lib/dpkg/lock
busybox mv /var/lib/dpkg/updates.suspended /var/lib/dpkg/updates
busybox mv /var/cache/apt/archives/lock.suspended /var/cache/apt/archives/lock
busybox mv /var/lib/dpkg/status.suspended /var/lib/dpkg/status
#debconf missing file descriptors workaround
busybox mv /usr/share/debconf/confmodule.bk /usr/share/debconf/confmodule
#keep status-old file to survive multiple abused apt instances
busybox mv /var/lib/dpkg/status-orig /var/lib/dpkg/status-old
}
_dpkg_sync_status_db() {
_dpkg_sync_status_db_script="/var/lib/dpkg/dpkg-sync-status-db"
_dpkg_sync_status_db_script_generator() {
printf "%s\\n" "#!/bin/sh"
printf "%s\\n" "#autogenerated by ${package}: $(date +%d-%m-%Y:%H:%M)"
printf "\\n"
printf "%s\\n" '##close stdout'
printf "%s\\n" '#exec 1<&-'
printf "%s\\n" '##close stderr'
printf "%s\\n" '#exec 2<&-'
printf "%s\\n" '##open stdout as $log_file file for read and write.'
printf "%s\\n" "#exec 1<> /tmp/${package}.\${$}.debug"
printf "%s\\n" '##redirect stderr to stdout'
printf "%s\\n" '#exec 2>&1'
printf "%s\\n" '#set -x #enable trace mode'
printf "\\n"
printf "%s\\n" "while fuser /var/lib/dpkg/lock >/dev/null 2>&1; do sleep 1; done"
printf "\\n"
printf "%s\\n" 'pkgs__add="$(cat /var/lib/apt/apt-add-queue)"'
printf "%s\\n" 'if [ -n "${pkgs__add}" ]; then'
printf "%s\\n" ' for pkg in $pkgs__add; do'
printf "%s\\n" ' if ! busybox grep "^Package: ${pkg}$" /var/lib/dpkg/status >/dev/null 2>&1; then'
printf "%s\\n" ' busybox sed -n "/Package: ${pkg}$/,/^$/p" \'
printf "%s\\n" " /var/lib/dpkg/status-append-queue >> /var/lib/dpkg/status"
printf "%s\\n" " fi"
printf "%s\\n" " done"
printf "%s\\n" "fi"
printf "\\n"
printf "%s\\n" 'pkgs__rm="$(cat /var/lib/apt/apt-rm-queue)"'
printf "%s\\n" 'if [ -n "${pkgs__rm}" ]; then'
printf "%s\\n" ' for pkg in $pkgs__rm; do'
printf "%s\\n" ' busybox sed -i "/Package: ${pkg}$/,/^$/d" /var/lib/dpkg/status'
printf "%s\\n" " done"
printf "%s\\n" "fi"
printf "\\n"
printf "%s\\n" "mv /var/lib/apt/apt-add-queue /var/lib/apt/apt-add-queue.bk"
printf "%s\\n" "mv /var/lib/apt/apt-rm-queue /var/lib/apt/apt-rm-queue.bk"
printf "%s\\n" "mv /var/lib/dpkg/status-append-queue /var/lib/dpkg/status-append-queue.bk"
printf "\\n"
printf "%s\\n" "rm -rf /var/lib/apt/apt-add-queue /var/lib/apt/apt-rm-queue"
printf "%s\\n" "rm -rf ${_dpkg_sync_status_db_script}"
}
_dpkg_sync_status_db_script_generator > "${_dpkg_sync_status_db_script}"
chmod +x "${_dpkg_sync_status_db_script}"
_daemonize /bin/sh -c "${_dpkg_sync_status_db_script}"
}
_daemonize() {
#http://blog.n01se.net/blog-n01se-net-p-145.html
[ -z "${1}" ] && return 1
( #1. fork, to guarantee the child is not a process
#group leader, necessary for setsid) and have the
#parent exit (to allow control to return to the shell)
#2. redirect stdin/stdout/stderr before running child
[ -t 0 ] && exec </dev/null
[ -t 1 ] && exec >/dev/null
[ -t 2 ] && exec 2>/dev/null
if ! command -v "setsid" >/dev/null 2>&1; then
#2.1 guard against HUP and INT (in child)
trap '' 1 2
fi
#3. ensure cwd isn't a mounted fs so it doesn't block
#umount invocations
cd /
#4. umask (leave this to caller)
#umask 0
#5. close unneeded fds
#XCU 2.7 Redirection says: open files are represented by
#decimal numbers starting with zero. The largest possible
#value is implementation-defined; however, all
#implementations shall support at least 0 to 9, inclusive,
#for use by the application.
i=3; while [ "${i}" -le "9" ]; do
eval "exec ${i}>&-"
i="$(($i + 1))"
done
#6. create new session, so the child has no
#controlling terminal, this prevents the child from
#accessing a terminal (using /dev/tty) and getting
#signals from the controlling terminal (e.g. HUP, INT)
if command -v "setsid" >/dev/null 2>&1; then
exec setsid "$@"
elif command -v "nohup" >/dev/null 2>&1; then
exec nohup "$@" >/dev/null 2>&1
else
if [ ! -f "${1}" ]; then
"$@"
else
exec "$@"
fi
fi
) &
#2.2 guard against HUP (in parent)
if ! command -v "setsid" >/dev/null 2>&1 &&
! command -v "nohup" >/dev/null 2>&1; then
disown -h "${!}"
fi
}
_apt_add_queue() {
for pkg in "${@}"; do
if busybox grep "^${pkg}$" /var/lib/apt/apt-rm-queue >/dev/null 2>&1; then
busybox sed -i "/^${pkg}$/d" /var/lib/apt/apt-rm-queue
else
if ! busybox grep "^Package: ${pkg}$" /var/lib/dpkg/status >/dev/null 2>&1; then
printf "%s\\n" "${pkg}" >> /var/lib/apt/apt-add-queue
fi
fi
done; unset pkg
}
_apt_rm_queue() {
for pkg in "${@}"; do
if busybox grep "^${pkg}$" /var/lib/apt/apt-add-queue >/dev/null 2>&1; then
busybox sed -i "/^${pkg}$/d" /var/lib/apt/apt-add-queue
else
if busybox grep "^Package: ${pkg}$" /var/lib/dpkg/status >/dev/null 2>&1; then
printf "%s\\n" "${pkg}" >> /var/lib/apt/apt-rm-queue
fi
fi
done; unset pkg
}
_apt_install() {
[ -z "${1}" ] && return
_apt_add_queue $(printf "%s\\n" "${@}" | busybox sed "s:${package}::g")
}
_apt_purge() {
[ -z "${1}" ] && return
_apt_rm_queue $(printf "%s\\n" "${@}" | busybox sed "s:${package}::g")
}
_apt_run() {
[ ! -f /var/lib/apt/apt-add-queue ] && [ ! -f /var/lib/apt/apt-rm-queue ] && return
pkgs__add="$(cat /var/lib/apt/apt-add-queue 2>/dev/null)"
if [ -n "${pkgs__add}" ]; then
_dpkg_suspend_process
busybox awk '/^Package: /{print $2}' /var/lib/dpkg/status | \
busybox sort > /var/lib/dpkg/status-pkgs.orig
_apt_run__output="$(DEBIAN_FRONTEND=noninteractive apt-get install \
--no-install-recommends -y -o Dpkg::Options::="--force-confdef" \
-o Dpkg::Options::="--force-confold" --force-yes ${pkgs__add} 2>&1)" || \
printf "%s\\n" "${_apt_run__output}" >&2
busybox awk '/^Package: /{print $2}' /var/lib/dpkg/status | \
busybox sort > /var/lib/dpkg/status-pkgs.current
_dpkg__added_pkgs="$(busybox diff -Naur /var/lib/dpkg/status-pkgs.orig \
/var/lib/dpkg/status-pkgs.current | busybox awk '/^\+[a-zA-Z]/{gsub("^+","");print;}')"
busybox rm -rf /var/lib/dpkg/status-pkgs*
#add dependencies
if [ -n "${_dpkg__added_pkgs}" ]; then
printf "%s\\n" "${_dpkg__added_pkgs}" >> /var/lib/apt/apt-add-queue
printf "%s\\n" "$(busybox sort /var/lib/apt/apt-add-queue | busybox uniq)" \
> /var/lib/apt/apt-add-queue
fi
#extract dpkg status output to append it at the end
for pkg in $_dpkg__added_pkgs; do
busybox sed -n '/Package: '"${pkg}"'$/,/^$/p' /var/lib/dpkg/status \
>> /var/lib/dpkg/status-append-queue
done
_dpkg_continue_process
fi
pkgs__rm="$(cat /var/lib/apt/apt-rm-queue 2>/dev/null)"
if [ -n "${pkgs__rm}" ]; then
_dpkg_suspend_process
busybox awk '/^Package: /{print $2}' /var/lib/dpkg/status | \
busybox sort > /var/lib/dpkg/status-pkgs.orig
_apt_run__output="$(DEBIAN_FRONTEND=noninteractive apt-get purge \
-y ${pkgs__rm} 2>&1)" || printf "%s\\n" "${_apt_run__output}" >&2
busybox awk '/^Package: /{print $2}' /var/lib/dpkg/status | \
busybox sort > /var/lib/dpkg/status-pkgs.current
_dpkg__removed_pkgs="$(busybox diff -Naur /var/lib/dpkg/status-pkgs.orig \
/var/lib/dpkg/status-pkgs.current | busybox awk '/^-[a-zA-Z]/{gsub("^-","");print;}')"
busybox rm -rf /var/lib/dpkg/status-pkgs*
#remove dependencies
if [ -n "${_dpkg__removed_pkgs}" ]; then
printf "%s\\n" "${_dpkg__removed_pkgs}" >> /var/lib/apt/apt-rm-queue
printf "%s\\n" "$(busybox sort /var/lib/apt/apt-rm-queue | busybox uniq)" \
> /var/lib/apt/apt-rm-queue
fi
_dpkg_continue_process
fi
_dpkg_sync_status_db
}
_apt_install additional packages
_apt_purge ugly packages
_apt_run
</pre>
<p>Not the most elegant solution but it works; I’ll leave it like that until I find a better alternative or change base distros.</p>
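<p>The <code class="language-plaintext highlighter-rouge">_daemonize</code> trick above can be exercised on its own; here is a minimal sketch, assuming a POSIX sh and using an illustrative <code class="language-plaintext highlighter-rouge">/tmp/daemonize.out</code> path:</p>

```shell
#minimal detach sketch: run a command in a new session, away from the tty
daemonize() {
    ( #the subshell plays the role of the fork
      [ -t 0 ] && exec </dev/null   #never read from the terminal
      [ -t 1 ] && exec >/dev/null   #nor write to it
      [ -t 2 ] && exec 2>/dev/null
      cd /                          #don't keep any mounted fs busy
      if command -v setsid >/dev/null 2>&1; then
          exec setsid "$@"          #new session, no controlling terminal
      else
          exec "$@"
      fi
    ) &
}

daemonize sh -c 'echo done > /tmp/daemonize.out'
```

<p>The subshell plus <code class="language-plaintext highlighter-rouge">setsid</code> is what detaches the child from the invoking terminal; the full version above additionally closes fds 3..9 and guards against HUP.</p>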
<p>Happy abusing 😋</p>
<ul>
<li><a href="http://stackoverflow.com/questions/18599599/apt-get-commands-from-within-a-deb-postinst">http://stackoverflow.com/questions/18599599/apt-get-commands-from-within-a-deb-postinst</a></li>
</ul>
papers I liked2015-08-10T00:00:00+00:00http://javier.io/blog/en/2015/08/10/papers-i-liked<h2 id="papers-i-liked">papers I liked</h2>
<h6 id="10-aug-2015">10 Aug 2015</h6>
<p>From time to time I read papers about all kinds of subjects, mostly about computer science, and then forget which ones I read or where they’re available for future reference. So I’m creating this list as a kind of personal wiki so I don’t forget anymore.</p>
<h3 id="computer-security">Computer Security</h3>
<ul>
<li>
<p><a href="http://www.iseclab.org/papers/sp2013privexec">PrivExec: Private Execution as an Operating System Service</a> - 2013. <a href="http://f.javier.io/rep/papers/sp2013privexec.pdf">[pdf]</a> 16 Pag. Kernel side private / temporal / irrecoverable execution environments.</p>
</li>
<li>
<p><a href="https://www.comp.nus.edu.sg/~liangzk/papers/tissec09.pdf">Alcatraz: An Isolated Environment for Experimenting with Untrusted Software</a> - 2009. <a href="http://f.javier.io/rep/papers/alcatraz-an-isolated-environment-for-experimenting-with-untrusted-software.pdf">[pdf]</a> 37 Pag. Commit + policy driven temporal sandboxing environments.</p>
</li>
<li>
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.300.4042&rep=rep1&type=pdf">The state of the art of application restrictions and sandboxes and its shortfalls</a> - 2012. <a href="http://f.javier.io/rep/papers/state-of-art-of-application-restrictions-and-sandboxes-2012.pdf">[pdf]</a> 40 Pag. Compilation of current security trends and its shortfalls, eg: selective execution (by white/black lists, heuristic -antivirus software, statistic based -symantec quorum, spynet, mcafee artemis, etc); rule based (DAC, Linux standard, Rainbow, polaris); application oriented access control (mapbox, android, bitfrost, dte, apparmor, selinux, tomoyo, systrace, alcatraz, some web apps); isolation based {permanent (virtual machines - kvm,xen,uml,virtualbox, containers - chroot, lxe, openvz, linux vserver, jails), ephemeral (privexec), both (alcatraz, sandboxie, pastures, returnil), isolated two ways (virtual machines, containers), isolated one way (privexec,alcatraz)}; monitoring system calls (systrace, plash, callgraph, pulse); combinations (app oriented access control + isolation: qubes, windowbox, apiary, peadpod).</p>
</li>
<li>
<p><a href="http://www.cs.utexas.edu/~bwaters/publications/papers/www2005.pdf">Password Multiplier: A convenient method for securely managing passwords</a> - 2005. <a href="http://f.javier.io/rep/papers/Password%20Multiplier:%20A%20convenient%20method%20for%20securely%20managing%20passwords.pdf">[pdf]</a> 9 Pag. Hash based passwords.</p>
</li>
</ul>
<h3 id="computer-virtualization">Computer Virtualization</h3>
<ul>
<li><a href="http://marceloneves.org/papers/pdp2013-containers.pdf">Performance Evaluation of Container-based Virtualization for High Performance Computing Environments</a> - 2014. <a href="http://f.javier.io/rep/papers/pdp2013-containers.pdf">[pdf]</a> 8 Pag. Xen, Openvz, Linux Vserver, LXC performance evaluation.</li>
</ul>
<h3 id="computer-operation-systems">Computer Operating Systems</h3>
<ul>
<li><a href="http://lib.tkk.fi/Diss/2012/isbn9789526049175/isbn9789526049175.pdf">Flexible Operating System Internals: The Design and Implementation of the Anykernel and Rump Kernels</a> - 2012. <a href="http://f.javier.io/rep/papers/anykernel-rump-kernel-isbn9789526049175.pdf">[pdf]</a> 362 Pag. Portable drivers across minimal kernels (anykernels) and system applications (rump kernels). Drivers as system libraries.</li>
</ul>
<!--
-## Computer Networks
-
-[Maglev: A Fast and Reliable Software Network Load Balancer](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44824.pdf) - 2016.[[pdf]](http://f.javier.io/rep/papers/Maglev%20-%20A%20fast%20and%20reliable%20software%20network%20load%20balancer.pdf) 13 Pag. Google's distributed software based network load balancer on commodity hardware.
-->
<h3 id="community-driven-papers-repositories">Community driven papers repositories</h3>
<ul>
<li><a href="https://github.com/papers-we-love/papers-we-love">https://github.com/papers-we-love/papers-we-love</a></li>
<li><a href="http://www.reddit.com/r/paperswelove">http://www.reddit.com/r/paperswelove</a></li>
</ul>
<p>By the way, if you’re new (as I am) to reading scientific papers, don’t forget to check out the guidelines for reading academic articles efficiently.</p>
<ul>
<li><a href="http://organizationsandmarkets.com/2010/08/31/how-to-read-an-academic-article/">How to read an academic article</a></li>
<li><a href="http://violentmetaphors.com/2013/08/25/how-to-read-and-understand-a-scientific-paper-2/">How to read and understand a scientific paper</a></li>
</ul>
<p>Happy reading 😋</p>
tundle, a tmux plugin manager2015-06-29T00:00:00+00:00http://javier.io/blog/en/2015/06/29/tundle-tmux-plugin-manager<h2 id="tundle-a-tmux-plugin-manager">tundle, a tmux plugin manager</h2>
<h6 id="29-jun-2015">29 Jun 2015</h6>
<p>In the past I’ve been a regular <a href="http://byobu.co/">byobu</a> user, a distribution for common terminal multiplexers (<a href="http://tmux.github.io/">tmux</a>, <a href="https://www.gnu.org/software/screen/">screen</a>). A terminal multiplexer is a utility that allows you to manage several sessions and windows within the same program, kind of a window manager for the console. In my case I mostly use it to improve the robustness of remote ssh connections: in a plain ssh session, if you lose the connection you lose your work; with a terminal multiplexer you can ‘detach/attach’ long-lived sessions, which is quite useful to stay mobile.</p>
<p><br />
<strong><img src="/assets/img/tundle.gif" alt="" /></strong></p>
<p>Unfortunately, like vi/emacs, the default screen/tmux settings are quite bad, so many people either heavily customize their own settings or use a distribution/plugin system.</p>
<p>I used to use byobu because of its ease of installation (at least on Ubuntu) and default status bar. However, for my needs it looked overwhelming and was difficult to modify; I prefer systems with a plugin-centric approach (like <a href="https://github.com/javier-lopez/vundle">vim + vundle</a>, or <a href="http://javier.io/blog/en/2013/11/15/shundle.html">sh + shundle</a>), so in the end I decided to migrate. Since tmux is way better than screen, I focused on it.</p>
<p>There is a recent attempt to create a general tmux plugin environment:</p>
<ul>
<li><a href="https://github.com/tmux-plugins/tpm">tpm</a></li>
<li><a href="https://github.com/tmux-plugins">tmux-plugins</a></li>
</ul>
<p>Tpm and its plugins are a great effort to cover the missing tmux features through an organized plugin system; it covers a fair amount of functionality and allows good granularity between plugins. However, it also has its drawbacks, the most important ones for me being its dependency on bash and recent tmux releases (>=1.9), and its inability to install anything but the latest version of a plugin (what if I want an older version with fewer features but more stability?). The tpm maintainer is a great guy, but these issues are not at the top of his list, and considering the amount of refactoring required to unmarry tpm from bash and specific tmux features, I finally decided to go my own way. That’s how <a href="https://github.com/javier-lopez/tundle">tundle</a> was born, an alternative tmux plugin environment with compatibility and version control in mind.</p>
<p>I’ve gone to great lengths to ensure tundle runs on as many platforms as possible (at least wherever tmux is available), degrading gracefully depending on the tmux features available (right now it runs on tmux >= 1.6; support for older versions can be discussed). In addition to improved portability/performance, it’s now possible to install plugins by git hash, ensuring you only run code you trust. All other features are similar, with the typical fix here and there resulting from a complete code review.</p>
<h3 id="quick-start">Quick start</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone --depth=1 https://github.com/javier-lopez/tundle ~/.tmux/plugins/tundle
</code></pre></div></div>
<p>After installing tundle additional bundle/plugin modules can be defined at <code class="language-plaintext highlighter-rouge">~/.tmux.conf</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>run-shell "~/.tmux/plugins/tundle/tundle"
#let tundle manage tundle, required!
setenv -g @bundle "javier-lopez/tundle"
#from GitHub
#you can specify a branch or commit sha checksum
setenv -g @bundle "javier-lopez/tundle-plugins/tmux-sensible:c7b09"
setenv -g @bundle "gh:javier-lopez/tundle-plugins/tmux-pain-control"
setenv -g @bundle "github:javier-lopez/tundle-plugins/tmux-resurrect"
</code></pre></div></div>
<p>And installed by starting <code class="language-plaintext highlighter-rouge">tmux</code> and pressing <code class="language-plaintext highlighter-rouge">Ctrl-b + I</code> or running <code class="language-plaintext highlighter-rouge">~/.tmux/plugins/tundle/scripts/install_plugins.sh</code></p>
<p>Tundle is able to install and run tpm plugins as well, but if you do so, portability is lost since tpm plugins will only work with tmux >= 1.9 and bash. If that’s not a problem, go ahead; you’ll still get extra syntax sugar and version control over your tmux environment.</p>
<p>Additional tundle plugins are available at:</p>
<ul>
<li><a href="https://github.com/javier-lopez/tundle-plugins">tundle-plugins</a></li>
</ul>
<p>Happy multiplexing 😋</p>
static-get: linux static binaries for lazy people2015-06-23T00:00:00+00:00http://javier.io/blog/en/2015/06/23/static-get<h2 id="static-get-linux-static-binaries-for-lazy-people">static-get: linux static binaries for lazy people</h2>
<h6 id="23-jun-2015">23 Jun 2015</h6>
<p><a href="http://javier.io/blog/en/2015/02/27/wget-finder.html">Lately</a> I’ve needed static versions of common linux utilities; it’s been fun to compile them a couple of times, but it gets boring pretty quickly, so I’ve decided to create a repository with all the static recipes I’ve found on the Internet (<a href="https://github.com/jelaas/bifrost-build">bifrost</a>, <a href="http://morpheus.2f30.org/">morpheus</a>, <a href="https://github.com/minos-org/minos-static/tree/master/misc-autosync-resources">etc</a>).</p>
<p>Now I can get <em>git static</em> with:</p>
<pre class="sh_sh">
$ static-get git
git-1.9.2.tar.xz
$ static-get -x git #download and extract in one go
git-1.9.2.tar.xz
git-1.9.2/
$ sh <(wget -qO- s.minos.io/s) -x git #retrieve the installer, download the target and extract in one go
</pre>
<p>To get a list of all available packages, you can run:</p>
<pre class="sh_sh">
$ static-get --search
</pre>
<p>Be aware that using static binaries has its <a href="http://www.akkadia.org/drepper/no_static_linking.html">drawbacks</a>; I take no responsibility for any damage caused by any binary downloaded with <a href="https://raw.githubusercontent.com/minos-org/minos-static/master/static-get">static-get</a>.</p>
<p>That’s it, happy fetching 😋</p>
<ul>
<li><a href="https://github.com/jelaas/bifrost-build">bifrost</a></li>
<li><a href="http://morpheus.2f30.org/">morpheus</a></li>
<li><a href="https://github.com/sabotage-linux/sabotage">sabotage, not real static recipes</a></li>
<li><a href="http://portablelinuxapps.org">portablelinuxapps, not real static recipes</a></li>
<li><a href="https://github.com/minos-org/minos-static">https://github.com/minos-org/minos-static</a></li>
<li><a href="https://www.janhouse.lv/blog/linux/building-static-binaries-on-linux/">https://www.janhouse.lv/blog/linux/building-static-binaries-on-linux/</a></li>
</ul>
sentry, an alternative to fail2ban and other bruteforce blocking daemons2015-03-25T00:00:00+00:00http://javier.io/blog/en/2015/03/25/sentry-an-alternative-to-bruteforce-blocking-daemons<h2 id="sentry-an-alternative-to-fail2ban-and-other-bruteforce-blocking-daemons">sentry, an alternative to fail2ban and other bruteforce blocking daemons</h2>
<h6 id="25-mar-2015">25 Mar 2015</h6>
<p>I’ve just migrated my servers from fail2ban to sentry, and it feels quite efficient =), so I’m writing this post as a way to increase sentry awareness.</p>
<p>Sentry is a program that detects and prevents bruteforce attacks against sshd and other network services using minimal system resources. Instead of running a daemon that constantly reads log files, it runs a perl script that uses tcpwrappers to track connections and block access by ip. Tcpwrappers is already installed on most modern UNIX-like systems (Linux, Mac OSX and FreeBSD), so if you additionally have perl installed it adds zero dependencies.</p>
<h3 id="installation">Installation</h3>
<h2 id="ubuntu--minos">Ubuntu | Minos</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo add-apt-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install sentry
</code></pre></div></div>
<h2 id="others">Others</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget http://www.tnpi.net/internet/sentry.pl
$ sudo perl sentry.pl
$ echo "sshd : /var/db/sentry/hosts.deny : deny" > hosts
$ echo "sshd : ALL : spawn /var/db/sentry/sentry.pl -c --ip=%a : allow" >> hosts
$ cat hosts /etc/hosts.allow > hosts.allow
$ sudo mv hosts.allow /etc/ && rm hosts
</code></pre></div></div>
<h3 id="usage">Usage</h3>
<p>Upon installation it doesn’t require anything else, it’ll just work. To see some statistics run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo /var/db/sentry/sentry.pl -r
no IP, skip info
-------- summary ---------
42 unique IPs have connected 190 times
1 IPs are whitelisted
38 IPs are blacklisted
</code></pre></div></div>
<p>To see blocked IPs</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo head -3 /var/db/sentry/hosts.deny
ALL: 103.41.124.119 : deny
ALL: 103.41.124.136 : deny
ALL: 115.230.124.208 : deny
</code></pre></div></div>
<p>The list can be edited either manually or through the --whitelist, --blacklist and --delist sentry.pl options</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo /var/db/sentry/sentry.pl --ip=103.41.124.119 --delist
$ sudo /var/db/sentry/sentry.pl --ip=103.41.124.119 --whitelist
$ sudo /var/db/sentry/sentry.pl --ip=103.41.124.119 --delist
$ sudo /var/db/sentry/sentry.pl --ip=103.41.124.119 --blacklist
</code></pre></div></div>
<p>That’s it, happy blocking 😋</p>
<ul>
<li><a href="https://www.tnpi.net/wiki/Sentry">https://www.tnpi.net/wiki/Sentry</a></li>
</ul>
pianocat2015-03-12T00:00:00+00:00http://javier.io/blog/en/2015/03/12/pianocat<h2 id="pianocat">pianocat</h2>
<h6 id="12-mar-2015">12 Mar 2015</h6>
<p>Lately I’ve been wondering why some music tones are so sticky. With this in mind, I’ve enrolled in “<a href="https://class.coursera.org/introclassicalmusic-001">Introduction to Classical Music</a>” on Coursera (which I totally recommend) and read about how musical notation and notes work.</p>
<p>It turned out that a piano is quite useful when learning these matters; unfortunately I don’t own one and have no plans to get one any time soon, so I decided to emulate it. As often happens, <a href="https://raw.githubusercontent.com/ssshake/console4kids/master/piano">someone had already worked on something similar</a>, so I took that work and adapted it to my needs, and that’s how pianocat was born.</p>
<pre class="sh_sh">
$ #basic tone
$ echo "D4 F4 - G4 A4 - A#4 A4 G4 - E4 C4 - D4 E4 F4" \
"- D4 D4 - C#4 D4 E4 - C#4 C#4 - D4 F4 - G4 A4 - A#4" \
"A4 G4 - E4 C4 - D4 E4 F4 - E4 D4 C#4 - C#4 D4 - - D4" | pianocat
$ #a more elaborated version of the previous melody
$ echo "T:4/4 L:1/4 D4 F4:2 ! G4 A4:2 ! A#4:.5 A4:.5" \
"G4:2 ! E4 C4:2 ! D4:.5 E4:.5 F4:2 ! D4 D4:2 ! C#4:.5" \
"D4:.5 E4:2 ! C#4 C#4:2 ! D4 F4:2 ! G4 A4:2 ! A#4:.5" \
"A4:.5 G4:2 ! E4 C4:2 ! D4:.5 E4:.5 F4:2 ! E4:.5 D4:.5" \
"C#4:2 ! C#4 D4:2 - D4:4" | pianocat
</pre>
<p><a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/pianocat">Pianocat</a> can also be used in interactive mode:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pianocat
_______________________________________________
| | | | | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | | | |
| |w| |r| | |t| |y| |u| | |o| |p| | |+| |
| |_| |_| | |_| |_| |_| | |_| |_| | |_| |
| | | | | | | | | | | | |
| a | s | d | f | g | h | j | k | l | ñ | { | } |
|___|___|___|___|___|___|___|___|___|___|___|___|
Press any key to play, 1..7 to select an octave
(by default 4) or Esc to exit
>
</code></pre></div></div>
<p>The sound is quite bad, but it works, so I’m leaving it like this for now. Thanks to the sox developers, to <a href="https://github.com/ssshake">ssshake</a> for the initial snippet and to <a href="https://github.com/s-d-m">Sam da Mota</a> for additional comments and pianoterm awareness.</p>
<p>UPDATE: 2015-03-19</p>
<p>Thanks to Martin Capodici, pianocat is now able to play real piano tunes =)! To use them, follow this procedure:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone --depth=1 git@github.com:javier-lopez/pianosounds.git
$ cd pianosounds #or mv pianosounds ~/.pianocat
$ pianocat
</code></pre></div></div>
<p>If you don’t have git, try:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget http://f.javier.io/rep/audio/pianosounds.tar.xz #or
$ wget http://f.javier.io/rep/audio/pianosounds.tar.bz2 #or
$ wget http://f.javier.io/rep/audio/pianosounds.tar.gz
</code></pre></div></div>
<p>That’s it, happy humming 😋</p>
<ul>
<li><a href="https://raw.githubusercontent.com/ssshake/console4kids/master/piano">https://raw.githubusercontent.com/ssshake/console4kids/master/piano</a></li>
<li><a href="https://github.com/s-d-m/pianoterm">https://github.com/s-d-m/pianoterm</a></li>
</ul>
wget-finder for packagers2015-02-27T00:00:00+00:00http://javier.io/blog/en/2015/02/27/wget-finder<h2 id="wget-finder-for-packagers">wget-finder for packagers</h2>
<h6 id="27-feb-2015">27 Feb 2015</h6>
<p>Since some days ago I’ve been playing with <a href="https://github.com/jelaas/bifrost-build">bifrost-build</a>, a github repository with recipes for building static linux binaries.</p>
<p>The recipes are no different from those of other linux distributions, where a hardcoded archive (containing the original source code) needs to be downloaded and must match a specific hash (in this case an md5sum) to continue the build.</p>
<p>While I was reviewing the recipes I noticed that some origins weren’t available anymore. Fortunately there are plenty of mirrors for the most common utilities installed with any linux system, so it wasn’t difficult to find alternative urls from which to fetch the missing bits (thanks mirrors and bifrost-build!).</p>
<p>After completing the builds I thought it shouldn’t be too difficult to automate this task: to create a program that could search for and download a specific archive matching a checksum. That’s how <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/wget-finder">wget-finder</a> was born.</p>
<pre class="sh_sh">
$ wget-finder
Usage: wget-finder [OPTION]... FILE:CHECKSUM...
</pre>
<p>The idea is simple: wget-finder will search for files, e.g. socat-1.7.2.0.tar.gz, matching a specific checksum (it supports md5, sha1, sha256 and sha512) on different search engines (currently google and ftplike; more engines are welcome!) and will download the appropriate one (actually it will download several of them until the checksum matches).</p>
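<p>The retry-until-the-checksum-matches loop at the core can be sketched in a few lines; <code class="language-plaintext highlighter-rouge">fetch_verified</code> and the single-source <code class="language-plaintext highlighter-rouge">download</code> helper below are hypothetical names for illustration, not wget-finder’s actual internals:</p>

```shell
#sketch: try candidate sources until one matches the expected md5
download() { wget -q -O "${2}" "${1}"; } #stand-in for one search-engine hit

fetch_verified() {
    #$1 = output file, $2 = expected md5, remaining args = candidate sources
    out="${1}"; want="${2}"; shift 2
    for src in "$@"; do
        download "${src}" "${out}" || continue #source unavailable, try the next
        got="$(md5sum "${out}" | awk '{print $1}')"
        [ "${got}" = "${want}" ] && return 0   #checksum matches, we are done
    done
    rm -f "${out}"; return 1                   #no candidate matched
}
```

<p>Swapping <code class="language-plaintext highlighter-rouge">download</code> for each engine’s fetch generalizes the loop to any number of mirrors.</p>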
<pre class="sh_sh">
$ wget-finder socat-1.7.2.0.tar.gz:0565dd58800e4c50534c61bbb453b771
socat-1.7.2.0.tar.gz
$ wget-finder -O libssh.tar.gz libssh2-1.3.0.tar.gz:6425331899ccf1015f1ed79448cb4709
libssh.tar.gz
</pre>
<p>That’s it, happy fetching 😋</p>
fix github rendering in old firefox releases with greasemonkey2015-02-11T00:00:00+00:00http://javier.io/blog/en/2015/02/11/github-rendering-firefox-greasemonkey<h2 id="fix-github-rendering-in-old-firefox-releases-with-greasemonkey">fix github rendering in old firefox releases with greasemonkey</h2>
<h6 id="11-feb-2015">11 Feb 2015</h6>
<p>When the new firefox interface (aurora) was announced I knew I would never install it, and since then I’ve been looking for alternatives. In the meantime I’ve been using an <a href="http://f.javier.io/rep/bin/firefox32.tar.bz2">old</a> <a href="http://f.javier.io/rep/bin/firefox64.tar.bz2">firefox</a> release (27.0); it’s been great so far, however some days ago <a href="https://github.com">https://github.com</a> started looking funny. Paniiiic 😱!</p>
<p><strong><a href="/assets/img/102.png"><img src="/assets/img/102.png" alt="" /></a></strong></p>
<p>I contacted support but they told me they didn’t support such old releases (and to think it’s barely 1 year old, pff, progress…), so I went ahead and hacked a quick greasemonkey script.</p>
<pre class="sh_javascript">
function addGlobalStyle(css) {
var head, style;
head = document.getElementsByTagName('head')[0];
if (!head) { return; }
style = document.createElement('style');
style.type = 'text/css';
style.innerHTML = css;
head.appendChild(style);
}
//github.com/user
addGlobalStyle('.one-fourth {width: 20%}');
addGlobalStyle('.one-half {width: 47%}');
addGlobalStyle('img.avatar {max-width: 200px; max-height: 200px;}');
//github.com
addGlobalStyle('.two-thirds {width: 63%}');
addGlobalStyle('.site-search input[type="text"] {width: 90%}');
</pre>
<p><strong><a href="/assets/img/103.png"><img src="/assets/img/103.png" alt="" /></a></strong></p>
<p>That’s it, happy collaborating 😋</p>
youtube videos from terminal2015-01-10T00:00:00+00:00http://javier.io/blog/en/2015/01/10/youtube-videos-from-terminal<h2 id="youtube-videos-from-terminal">youtube videos from terminal</h2>
<h6 id="10-jan-2015">10 Jan 2015</h6>
<p>There are multiple ways to watch youtube videos from a linux terminal; one of the simplest (and most unix-like) is mplayer+youtube-dl: mplayer for playing and youtube-dl for fetching the content.</p>
<p>To do so, go to a shell terminal and define the following alias:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ alias youtube-slice='sh -c '\''youtube-dl -q -o- "${1}" | mplayer -cache 8192 -'\'' -'
</code></pre></div></div>
<p>After that:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ youtube-slice $url
</code></pre></div></div>
<p>Will work nicely. If you are interested in this and more cool aliases, check out aliazator; it has tons of handy ones waiting for you to discover.</p>
<ul>
<li><a href="http://rg3.github.io/youtube-dl/download.html">youtube-dl</a></li>
<li><a href="https://github.com/javier-lopez/shundle-plugins/tree/master/aliazator">aliazator</a></li>
</ul>
<p>Happy watching 😋</p>
using vim objects in bash2014-11-16T00:00:00+00:00http://javier.io/blog/en/2014/11/16/vim-objects-in-bash<h2 id="using-vim-objects-in-bash">using vim objects in bash</h2>
<h6 id="16-nov-2014">16 Nov 2014</h6>
<p>I’ve been using <a href="http://www.catonmat.net/blog/bash-vi-editing-mode-cheat-sheet/">vi-mode</a> in bash for a couple of years now, more than once I’ve tried to edit something with <strong>ci”</strong>, <strong>ca(</strong>, or any other popular <a href="http://blog.carbonfive.com/2011/10/17/vim-text-objects-the-definitive-guide/">vim object</a>.</p>
<p>This week I decided to go further and see how to do it, and it turned out to be possible =). So if you’ve missed this feature too, you can now enjoy it by following this procedure:</p>
<h3 id="setup">Setup</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo add-apt-repository ppa:minos-archive/main #only Ubuntu LTS releases
$ sudo apt-get update && sudo apt-get install bash-minos-settings
$ echo 'set -o vi' >> ~/.bashrc
</code></pre></div></div>
<p>If you’re not interested in installing the whole enchilada or aren’t running an apt-get powered OS, you can get the raw inputrc file at:</p>
<ul>
<li><a href="https://github.com/minos-org/bash-minos-settings/blob/master/etc.inputrc">https://github.com/minos-org/bash-minos-settings/blob/master/etc.inputrc</a></li>
</ul>
<p>In the latter case, the inputrc file should be placed at <strong>~/.inputrc</strong> or <strong>/etc/inputrc</strong></p>
<p>Happy editing 😋</p>
hints for writing unix tools with shell scripting2014-10-21T00:00:00+00:00http://javier.io/blog/en/2014/10/21/hints-in-writing-unix-tools-with-shell-scripting<h2 id="hints-for-writing-unix-tools-with-shell-scripting">hints for writing unix tools with shell scripting</h2>
<h6 id="21-oct-2014">21 Oct 2014</h6>
<p>Yesterday I started my day reading <a href="http://monkey.org/~marius/unix-tools-hints.html">Hints for writing Unix tools</a>, and since I agree to a great extent, I thought I’d give more details about how to build such tools with my favorite language. I’d really enjoy reading similar entries aimed at other languages.</p>
<h2 id="consume-input-from-stdin-produce-output-to-stdout">Consume input from stdin, produce output to stdout</h2>
<p>In Unix you can refer to stdin, stdout and stderr using file descriptors 0, 1 and 2. We use them all the time; for example, to send all errors from the <code class="language-plaintext highlighter-rouge">find</code> command to /dev/null you can type:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ find / -name "*pattern*" 2>/dev/null
</code></pre></div></div>
<p>And it so happens that bash/zsh/sh and probably many other shells can test whether a fd is open and associated with a terminal using the <code class="language-plaintext highlighter-rouge">-t</code> test.</p>
<p>With this knowledge, consuming input and modifying the behavior of your programs to act differently depending on where the output goes (pipe, file, stdout) is as easy as testing whether the appropriate fd is active. For instance, to consume standard input in your programs the following will work if placed properly (before parsing options?):</p>
<pre class="sh_sh">
if [ ! -t 0 ]; then
#there is input coming from a pipe or file, add it to the end of $@
set -- "${@}" $(cat)
fi
</pre>
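<p>You can watch the <code class="language-plaintext highlighter-rouge">-t</code> test flip without writing a full script; the inline <code class="language-plaintext highlighter-rouge">sh -c</code> wrapper below is just for demonstration:</p>

```shell
#when input comes from a pipe, [ -t 0 ] is false and $(cat) picks it up
printf '%s\n' "hola mundo" | sh -c '
    if [ ! -t 0 ]; then
        set -- "$@" $(cat)  #append the piped words to the positional parameters
    fi
    printf "args: %s\n" "$*"
' demo
#prints: args: hola mundo
```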
<p>To control the output, you can test fd 1, as in this example:</p>
<pre class="sh_sh">
if command -v "xclip" >/dev/null 2>&1 && [ -t 1 ]; then
printf "%s\\n" "${_translate_var_result}" | xclip -selection clipboard && xclip -o -selection clipboard
else
printf "%s\\n" "${_translate_var_result}"
fi
</pre>
<p>The above will allow to use <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/translate">translate</a> in the following ways:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ translate hola
$ echo hola | translate
$ echo hola | translate | sed "s:$: world:"
</code></pre></div></div>
<h2 id="output-should-be-free-from-header-or-other-decoration">Output should be free from header or other decoration</h2>
<p>Adding options to shell scripts is easy. If you like adding extra sugar to your output, consider doing it behind flags (some examples are -v, --verbose, -a, --all, etc), but by default try to output the simplest response; consider <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/howdoi">howdoi</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ howdoi extract a tar.bz2 package in unix
tar -xjf /path/to/archive.tar.bz
$ howdoi -a extract a tar.bz2 package in unix
use the -j option of tar.
tar -xjf /path/to/archive.tar.bz
-----
If it's really an old bzip 1 archive, try:
bunzip archive.tar.bz
and you'll have a standard tar file.
Otherwise, it's the same as with .tar.bz2 files.
-----
http://stackoverflow.com/questions/9454929/how-can-i-untar-a-tar-bz-file-in-unix
$ howdoi -l extract a tar.bz2 package in unix
http://stackoverflow.com/questions/9454929/how-can-i-untar-a-tar-bz-file-in-unix
</code></pre></div></div>
<p>Global vars are a good way to track output options.</p>
<pre class="sh_sh">
for arg; do #parse options
    case "${arg}" in
        -a) AFLAG="set"; shift;;
        -l) LFLAG="set"; shift;;
        -c) CFLAG="set"; shift;;
    esac
done
</pre>
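<p>A minimal sketch of how such global flags can later switch the output; the answers are placeholders borrowed from the howdoi sample above:</p>

```shell
#!/bin/sh
#sketch: a global flag set while parsing options drives the output format later
answer() {
  LFLAG=""
  for arg; do #parse options
    case "${arg}" in
      -l) LFLAG="set";;
    esac
  done

  if [ -n "${LFLAG}" ]; then
    #link only mode
    printf '%s\n' "http://stackoverflow.com/questions/9454929"
  else
    #default: the simplest response
    printf '%s\n' "tar -xjf /path/to/archive.tar.bz"
  fi
}

answer      #prints the simplest response
answer -l   #prints only the link
```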
<h2 id="treat-a-tools-output-as-an-api">Treat a tool’s output as an API</h2>
<p>You can create tests to ensure that your output format doesn’t change and actually works. There are <a href="http://shunit.sourceforge.net/">several</a> <a href="http://bmizerany.github.io/roundup/">test suites</a> capable of managing <a href="https://github.com/lehmannro/assert.sh">shell</a> <a href="http://joyful.com/shelltestrunner/">scripts</a>, but one of the simplest is <a href="http://fossies.org/linux/shtool/test.sh">shtool test suite</a> by Ralf S. Engelschall.</p>
<p>Let’s retake the previous script and add some tests:</p>
<pre class="sh_sh">
@begin{howdoi}
howdoi; test X"${?}" = X"1"
printf "%s" '-h' | howdoi; test X"${?}" = X"1"
howdoi --help ; test X"${?}" = X"1"
howdoi --cui; test X"${?}" = X"1"
test X"$(howdoi 2>&1|head -1)" = X"Usage: howdoi [options] query ..."
test X"$(howdoi -h 2>&1|head -1)" = X"Usage: howdoi [options] query ..."
test X"$(printf "%s" '--help' | howdoi 2>&1|head -1)" = X"Usage: howdoi [options] query ..."
test X"$(howdoi -cui 2>&1|head -1)" = X"howdoi: unrecognized option \`-cui'"
test X"$(howdoi -n 2>&1|head -1)" = X"Option \`-n' requires a parameter"
test X"$(howdoi -n cui 2>&1|head -1)" = X"Option \`-n' requires a number: 'cui'"
test X"$(howdoi XaMTWGfu89iQpJk6 2>&1|head -1)" = X"howdoi: No results"
test X"$(howdoi -C 2>&1)" = X"Cache cleared successfully"
test ! -d ~/.cache/howdoi
test X"$(howdoi XaMTWGfu89iQpJk6 2>&1|head -1)" = X"howdoi: No results"
test -d ~/.cache/howdoi
@end
</pre>
<p>If you include the output format in your tests it becomes harder to change it inadvertently.</p>
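<p>Even without a test suite the same idea works with plain <code>test</code>; this sketch pins the usage line of a hypothetical <code>mytool</code>:</p>

```shell
#!/bin/sh
#sketch: lock a tool's output format with plain test(1); mytool is hypothetical
mytool() { printf '%s\n' "Usage: mytool [options] query ..."; }

if test X"$(mytool | head -1)" = X"Usage: mytool [options] query ..."; then
  printf '%s\n' "output format OK"
else
  printf '%s\n' "output format changed" >&2
  exit 1
fi
```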
<h2 id="place-diagnostics-output-on-stderr">Place diagnostics output on stderr.</h2>
<p>This one is really easy: adding <code class="language-plaintext highlighter-rouge">>&2</code> to all diagnostic, help and verbose messages will do it.</p>
<pre class="sh_sh">
#before
printf "%s\\n" "$(expr "${0}" : '.*/\([^/]*\)'): unrecognized option '${arg}'"
#after
printf "%s\\n" "$(expr "${0}" : '.*/\([^/]*\)'): unrecognized option '${arg}'" >&2
</pre>
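<p>With diagnostics on stderr, consumers down the pipe only ever see real output; a quick sketch:</p>

```shell
#!/bin/sh
#diagnostics go to stderr, results to stdout, so pipes stay clean
emit() {
  printf '%s\n' "processing input..." >&2   #diagnostic message
  printf '%s\n' "result"                    #actual output
}

emit 2>/dev/null | tr 'a-z' 'A-Z'   #prints RESULT; the diagnostic never enters the pipe
```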
<h2 id="signal-failure-with-an-exit-status">Signal failure with an exit status.</h2>
<p>The current status can be set in bash/zsh/sh with either <code class="language-plaintext highlighter-rouge">true</code>, <code class="language-plaintext highlighter-rouge">: (true)</code>, <code class="language-plaintext highlighter-rouge">false</code>, <code class="language-plaintext highlighter-rouge">return</code> or <code class="language-plaintext highlighter-rouge">exit</code></p>
<p>The first three can be used to set the current status in iterations, e.g.</p>
<pre class="sh_sh">
_rdeps()
{
[ -z "${1}" ] && return 1
for _rdeps_var_binary; do
fpath="$(command -v "${_rdeps_var_binary}")"
[ -z "${fpath}" ] && continue
if ldd "${fpath}" >/dev/null 2>/dev/null; then
ldd "${fpath}" | sort -n | uniq | awk '{print $1}' | xargs -i apt-file search {} | cut -d':' -f1 | sort | uniq
else
printf "$(expr "${0}" : '.*/\([^/]*\)'): %s\\n" "not a dynamic executable '${fpath}'" >&2 && false
fi
done
}
</pre>
<p>The above code sets the status to 1 without necessarily quitting or returning from the function, except when no parameter is given.</p>
<p><code class="language-plaintext highlighter-rouge">exit N</code> can be used at any point to exit the program with the specified status; it’s quite useful when testing for dependencies, exiting with an error if any of them is not available, e.g.</p>
<pre class="sh_sh">
if ! command -v "curl" >/dev/null 2>&1; then
printf "%s\\n" "you need to install 'curl' to run this program" >&2
exit 1
fi
</pre>
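<p>Callers can then branch on that status; a small sketch with a hypothetical <code>check_dep</code> helper:</p>

```shell
#!/bin/sh
#hypothetical helper: status 0 when a dependency exists, 1 when it is missing
check_dep() {
  command -v "${1}" >/dev/null 2>&1 || return 1
}

if check_dep sh; then
  printf '%s\n' "sh is available"
fi
check_dep no-such-tool-xyz || printf '%s\n' "missing dependency"
```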
<h2 id="omit-needless-diagnostics">Omit needless diagnostics.</h2>
<p>As the heading says, output should be as clear and simple as possible; optional detail can go behind a verbose flag, with a function defined and used as follows:</p>
<pre class="sh_sh">
_verbose()
{
[ -z "${1}" ] && return 1
[ -n "${VFLAG}" ] && printf "%b\\n" "${*}"
}
for arg; do #parse options
case "${arg}" in
-v|--verbose) VFLAG="set"; shift;;
...
esac
_verbose "detailed message"
</pre>
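<p>A quick sketch of the function in action; it mirrors the definition above, with an explicit return added so a silent call doesn’t report failure:</p>

```shell
#!/bin/sh
#_verbose only prints when VFLAG is set, and returns 0 either way
_verbose() {
  [ -z "${1}" ] && return 1
  [ -n "${VFLAG}" ] && printf "%b\\n" "${*}"
  return 0
}

VFLAG=""    ; _verbose "hidden detail"   #prints nothing
VFLAG="set" ; _verbose "shown detail"    #prints "shown detail"
```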
<p>And for debugging, <code class="language-plaintext highlighter-rouge">set -x</code> will reveal most issues most of the time.</p>
<h2 id="avoid-making-interactive-programs">Avoid making interactive programs</h2>
<p>Writing interactive programs in shell is actually harder than parsing cli arguments and outputting simple strings, so this principle shouldn’t be difficult to follow; but if you still want to break it, make sure interactive use is only an additional mode and a batch one remains available.</p>
<p>Happy tooling 😋</p>
<ul>
<li><a href="http://monkey.org/~marius/unix-tools-hints.html">http://monkey.org/~marius/unix-tools-hints.html</a></li>
<li><a href="https://github.com/javier-lopez/learn/blob/master/sh/guideline.md">Personal guidelines</a></li>
<li><a href="http://f.javier.io/rep/books/Beginning_shell_scripting.pdf">Beginning shell scripting</a></li>
<li><a href="https://github.com/javier-lopez/learn/tree/master/sh">Shell scripts following exposed advices</a></li>
</ul>
ssh into a guest vbox machine on NAT mode2014-09-13T00:00:00+00:00http://javier.io/blog/en/2014/09/13/ssh-into-guess-virtualbox-using-nat<h2 id="ssh-into-a-guest-vbox-machine-on-nat-mode">ssh into a guest vbox machine on NAT mode</h2>
<h6 id="13-sep-2014">13 Sep 2014</h6>
<p>This is a quick reminder to myself; for this to work an ssh server must be running in the guest machine.</p>
<h3 id="configuration">Configuration</h3>
<p>In the VM network panel, click in <strong>advanced</strong> and then in the <strong>Port Forwarding</strong> button, there setup the next rule:</p>
<ul>
<li>Host IP: 127.0.0.1</li>
<li>Host Port: 2222</li>
<li>Guest IP: 10.0.0.2 (or the ip of the guest machine)</li>
<li>Guest Port: 22 (or personalized ssh port)</li>
</ul>
<p>Save changes</p>
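<p>The same rule can also be created from the command line with VBoxManage (a sketch; replace <code>my-vm</code> with the actual VM name, adjust the IPs, and note the VM should be powered off):</p>

```shell
$ VBoxManage modifyvm "my-vm" --natpf1 "guestssh,tcp,127.0.0.1,2222,10.0.0.2,22"
$ VBoxManage modifyvm "my-vm" --natpf1 delete "guestssh"   #remove the rule again
```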
<h3 id="usage">Usage</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh -p 2222 user@localhost
</code></pre></div></div>
<p>Happy remote working 😋</p>
<ul>
<li><a href="https://forums.virtualbox.org/viewtopic.php?f=8&t=55766">https://forums.virtualbox.org/viewtopic.php?f=8&t=55766</a></li>
</ul>
strip mp3 tags with ffmpeg2014-08-11T00:00:00+00:00http://javier.io/blog/en/2014/08/11/strip-mp3-tags-with-ffmpeg<h2 id="strip-mp3-tags-with-ffmpeg">strip mp3 tags with ffmpeg</h2>
<h6 id="11-aug-2014">11 Aug 2014</h6>
<p>I use mpd to satisfy my local music player needs. mpd reads multimedia tags and attaches them to its database, and I use these tags to look for tracks and artists quickly. However, sometimes I end up with mp3 files containing useless tags; in those cases I wish mpd could look at filenames instead of mp3 tags, because when it doesn’t it becomes incredibly difficult to find those tracks. Since I haven’t managed to find this feature (if it exists) I just strip the problematic tags (someday I’ll learn to edit them instead, or to program the missing part).</p>
<pre class="sh_sh">
$ ffmpeg -i track.mp3 -acodec copy -map_metadata -1 track.t.mp3 && mv track.t.mp3 track.mp3
</pre>
<p>If you find the above command useful you can create an alias/function.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>alias strip.mp3tags='sh -c '\''ffmpeg -i "$1" -acodec copy -map_metadata -1 "$1".t.mp3 && mv "${1}".t.mp3 "${1}"'\'' -'
</code></pre></div></div>
<p>Happy stripping 😋</p>
detect file moves and renames with rsync2014-08-06T00:00:00+00:00http://javier.io/blog/en/2014/08/06/rsync-rename-move<h2 id="detect-file-moves-and-renames-with-rsync">detect file moves and renames with rsync</h2>
<h6 id="06-aug-2014">06 Aug 2014</h6>
<p>I use rsync to backup my <em>$HOME</em> directory everyday with something like this:</p>
<pre class="sh_sh">
$ sudo rsync -az --one-file-system --delete $HOME/ admin@backup.javier.io:~/backup/$(hostname)
</pre>
<p>Most of the time it takes less than <strong>10</strong> minutes at 2MB/s to re-sync everything, however last weekend it took almost <strong>20 hours!</strong>, so while I was waiting I decided to take a look at what was happening. It turned out rsync was re-uploading some pretty heavy files because I had renamed them locally. I couldn’t believe rsync was so dumb, I was shocked O_O</p>
<p>So I searched the Internet for solutions; fortunately other people had run into this problem before and created a couple of patches:</p>
<ul>
<li><a href="https://attachments.samba.org/attachment.cgi?id=7435">detect-renamed</a></li>
<li><a href="https://git.samba.org/?p=rsync-patches.git;a=blob;f=detect-renamed-lax.diff;h=4cd23bd4524662f1d0db0bcc90336a77d0bb61c9;hb=HEAD">detect-renamed-lax</a></li>
</ul>
<p>These patches add the following options:</p>
<ul>
<li>--detect-renamed, --detect-renamed-lax</li>
<li>--detect-moved</li>
</ul>
<p>Since I’m not the kind of person who enjoys spending their time compiling software, I packaged a modified rsync version for supported Ubuntu LTS versions and uploaded it to a PPA; while doing so I updated the patches to compile against the latest rsync version (3.1.1 at the time of writing).</p>
<pre class="sh_sh">
$ sudo apt-add-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install rsync
</pre>
<p>In my personal tests the modified rsync shows an amazing speed up for uploads that involve renamed/moved files, so I’m installing it on all my computers.</p>
<p><strong>Note: For this to work, both, server and client must have installed the modified rsync version</strong></p>
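<p>For reference, the daily backup command from above only needs to gain one flag (a sketch; again, both ends must run the patched build):</p>

```shell
$ sudo rsync -az --one-file-system --delete --detect-renamed $HOME/ admin@backup.javier.io:~/backup/$(hostname)
```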
<p>Happy uploading 😋</p>
<p>References:</p>
<ul>
<li><a href="https://bugzilla.samba.org/show_bug.cgi?id=2294">https://bugzilla.samba.org/show_bug.cgi?id=2294</a></li>
<li><a href="https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1353792">https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1353792</a></li>
<li><a href="https://github.com/javier-lopez/learn/blob/master/patches/rsync-3.1.1-trusty-detect-renamed.diff">https://github.com/javier-lopez/learn/blob/master/patches/rsync-3.1.1-trusty-detect-renamed.diff</a></li>
<li><a href="https://github.com/javier-lopez/learn/blob/master/patches/rsync-3.1.1-trusty-detect-renamed-lax.diff">https://github.com/javier-lopez/learn/blob/master/patches/rsync-3.1.1-trusty-detect-renamed-lax.diff</a></li>
</ul>
installing debian build dependencies the smart way2014-04-16T00:00:00+00:00http://javier.io/blog/en/2014/04/16/installing-debian-build-dependencies-the-smart-way<h2 id="installing-debian-build-dependencies-the-smart-way">installing debian build dependencies the smart way</h2>
<h6 id="16-apr-2014">16 Apr 2014</h6>
<h3 id="the-problem">The Problem</h3>
<p>So you’re trying to build a Debian package from an upstream source tree, but you’re not sure what build dependencies you should install? I have this problem all the time. For example, if I wanted to build the unity source tree into a debian package, I’d branch it:</p>
<pre class="sh_sh">
$ bzr branch lp:unity
</pre>
<p>…change into the directory:</p>
<pre class="sh_sh">
$ cd unity
</pre>
<p>…and then try and build it. Needless to say, I almost never have the required build dependencies installed. You can try and use apt-get to install the build dependencies for you:</p>
<pre class="sh_sh">
$ sudo apt-get build-dep unity
</pre>
<p>But that reads the build dependencies from whatever version is present in the distribution, not the dependencies in the source tree.</p>
<h3 id="the-solution">The Solution</h3>
<p>The solution is easy. First, install a couple of packages:</p>
<pre class="sh_sh">
$ sudo apt-get install devscripts equivs
</pre>
<p>Then run this from within the source tree to install the build dependencies:</p>
<pre class="sh_sh">
$ sudo mk-build-deps -i
</pre>
<p>If, after this command completes you still cannot build the package, you should probably file a bug against the upstream project!</p>
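<p>Under the hood mk-build-deps generates and installs a metapackage named after the source package with a <code>-build-deps</code> suffix, which makes later cleanup easy (a sketch; check your devscripts version for the exact flags):</p>

```shell
$ sudo mk-build-deps -i -r   #-r removes the generated .deb file after installing it
$ sudo apt-get remove unity-build-deps && sudo apt-get autoremove   #drop the build deps later
```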
<ul>
<li><a href="http://www.tech-foo.net/installing-build-dependencies-the-smart-way.html">http://www.tech-foo.net/installing-build-dependencies-the-smart-way.html</a></li>
</ul>
backups with rsync and rdiff-backup2014-04-08T00:00:00+00:00http://javier.io/blog/en/2014/04/08/backups-git-rsync-rdiff-backup<h2 id="backups-with-rsync-and-rdiff-backup">backups with rsync and rdiff-backup</h2>
<h6 id="08-apr-2014">08 Apr 2014</h6>
<p>I don’t remember the last time I lost information; that’s been mostly luck, since I’m not really careful with my data. However, with internet providers increasing bandwidth, efficient compression algorithms all around and affordable servers in the cloud, I finally decided to stop relying on luck and automate my backup plan.</p>
<p>I’m fortunate to work in a homogeneous environment, Linux x32/x64 boxes, so I can cling to the lowest common denominator, in this case ssh/rsync. Both are installed (or available through default repositories) in virtually all Linux distributions and are secure, mature, efficient and well supported. There is a little issue with them though: they have too many options which can be tricky to remember.</p>
<p>So, with that in mind I grouped my favorite preferences and created a <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/backup-remote-rsync">wrapper script</a>. That’s what I use to backup machines, it works like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ backup-remote-rsync -r b.javier.io #the program will backup $HOME to b.javier.io:~/hostname using default ssh keys
$ backup-remote-rsync -r b.javier.io -u admin -k /home/admin/.ssh/id_rsa /var/www /etc
#the program will backup /var/www and /etc to b.javier.io:~/hostname, while using admin's public ssh keys
</code></pre></div></div>
<p>The above lines can be added to a crontab to remove the human dependency:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo crontab -l
#every day at 22:00
0 22 * * * backup-remote-rsync -r backup.javier.io -u admin -i /home/admin/.ssh/id_rsa /home/admin
</code></pre></div></div>
<p>It’s well known that rsync transfers files using a delta-based mechanism, so once the initial backup is done further invocations are considerably faster; this is a great incentive to run backups often.</p>
<p>On the server side, I used <a href="http://www.nongnu.org/rdiff-backup/examples.html">rdiff-backup</a> to create dailies/weeklies/monthlies.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>0 1 * * * rdiff-backup /home/admin/backup/ /home/admin/recover/daily
0 2 * * * rdiff-backup --remove-older-than 6D /home/admin/recover/daily
0 1 * * 0 rdiff-backup /home/admin/backup/ /home/admin/recover/weekly
0 2 * * 0 rdiff-backup --remove-older-than 3W /home/admin/recover/weekly
0 1 1 * * rdiff-backup /home/admin/backup/ /home/admin/recover/monthly
0 2 1 * * rdiff-backup --remove-older-than 12M /home/admin/recover/monthly
</code></pre></div></div>
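<p>Restoring from those snapshots is a single command; for example, recovering a file as it looked 3 days ago (paths are illustrative):</p>

```shell
$ rdiff-backup -r 3D /home/admin/recover/daily/etc/fstab /tmp/fstab.3-days-ago
```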
<p>And added <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/share-backup">share-backup</a>, to provide fast and secure access to single files. Complete recoveries are available through standard rsync.</p>
<p><code class="language-plaintext highlighter-rouge">share-backup</code> usage example:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ssh admin@b.javier.io share-backup
Starting server ...
address : http://b.javier.io:7648
username: guest
password: M2U4ZDRj
ssl :
serving: /home/admin/recovery
Run: share-backup stop, to stop sharing
$ ssh admin@b.javier.io share-backup stop
Stopped
</code></pre></div></div>
<h2 id="decisions">Decisions</h2>
<p>While testing backup utilities and different strategies I found myself asking priority questions: is it more important to save hard disk space or to keep several copies around? does the backup plan require third party integrations or can it be generic? how many resources in time and money am I willing to invest? how fast must the recovery process be? how private is the data?</p>
<p>I tried to answer these questions as honestly as possible and found that the above procedure covers me in most situations; that doesn’t mean it will work for you. I urge you to test as many alternatives as possible and stick only with those that make you feel comfortable and secure. If you’re out of ideas take a look at:</p>
<ul>
<li><a href="http://www.magiksys.net/ddumbfs/">ddumbfs</a></li>
<li><a href="https://btrfs.wiki.kernel.org/index.php/Main_Page">btrfs</a></li>
<li><a href="https://github.com/s3fs-fuse/s3fs-fuse">s3fs</a></li>
<li><a href="http://opendedup.org/">opendedup</a></li>
<li><a href="https://github.com/bup/bup">bup</a></li>
<li><a href="http://obnam.org/">obnam</a></li>
<li><a href="http://bacula.org/">bacula</a></li>
<li><a href="https://en.wikipedia.org/wiki/List_of_backup_software">etc</a>.</li>
</ul>
<p>That’s it, stay safe and backup your data now 😋!</p>
ffcast2014-03-21T00:00:00+00:00http://javier.io/blog/en/2014/03/21/ffcast<h2 id="ffcast">ffcast</h2>
<h6 id="21-mar-2014">21 Mar 2014</h6>
<p><strong><a href="/assets/img/ffcast.gif"><img src="/assets/img/ffcast.gif" alt="" /></a></strong></p>
<p>Over the last few days I’ve rewritten <a href="https://github.com/lolilolicon/FFcast2">ffcast</a> and packaged it (only for supported Ubuntu LTS versions) without an obvious reason. ffcast is a program to create screencasts. I’ve already written similar utilities but this one is better.</p>
<h3 id="installation">Installation</h3>
<pre class="sh_sh">
$ sudo apt-add-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install ffcast
</pre>
<h3 id="usage">Usage</h3>
<pre class="sh_sh">
$ ffcast -s
</pre>
<p>The above command will record the selected area and create a movie with a random name (of 8 characters) in $HOME. It’ll be a nice addition to my shortcuts 😋 Another one-liner I’ll probably use is:</p>
<pre class="sh_sh">
$ ffcast -vv -s ffmpeg -follow_mouse centered -r 25 -- -f alsa -i hw:0 -vcodec libx264 cast.mkv
</pre>
<p>It makes the screencast follow my mouse. With ffcast it is easy to create <strong>.gif</strong> movies as well:</p>
<pre class="sh_sh">
$ ffcast -s ffmpeg -r 15 -- -pix_fmt rgb24 out.gif
$ convert -layers Optimize out.gif out_opt.gif
</pre>
<p>References</p>
<ul>
<li><a href="https://github.com/lolilolicon/FFcast2">https://github.com/lolilolicon/FFcast2</a> (original version)</li>
<li><a href="https://github.com/javier-lopez/ffcast">https://github.com/javier-lopez/ffcast</a> (personal one)</li>
<li><a href="http://unix.stackexchange.com/questions/113695/gif-screencasting-the-unix-way">http://unix.stackexchange.com/questions/113695/gif-screencasting-the-unix-way</a></li>
</ul>
madre terra2014-02-26T00:00:00+00:00http://javier.io/blog/pt/2014/02/26/madre-terra<h2 id="madre-terra">madre terra</h2>
<h6 id="26-feb-2014">26 Feb 2014</h6>
<pre class="lyric">
Aqui está meu incenso perfumado
meu cacau cobiçado
minha fresca medicina
aqui está minha pena que se levanta
Hoje entrego-os
em homenagem por deixar-me
tocar a graça do seu corpo
Eu cresço em você pai
eu vivo em ti mãe
minhas mãos foram manchadas
feri você
O criador assim o quis
assim o senhor das alturas o ordenou
de você sairá minha comida e bebida
um pouco para mim
outro para ti
tu também tens sede
Aqui está meu tributo
aqui está minha gratidão
isto te dará forças
isto te dará vida
Juan Gregorio Regino / Madre terra (Nijma en nima, IX)
</pre>
howdoi, a code search tool and a sh implementation2014-02-25T00:00:00+00:00http://javier.io/blog/en/2014/02/25/howdoi-in-shell-scripting<h2 id="howdoi-a-code-search-tool-and-a-sh-implementation">howdoi, a code search tool and a sh implementation</h2>
<h6 id="25-feb-2014">25 Feb 2014</h6>
<p>During these days I read about <a href="https://github.com/gleitz/howdoi">howdoi</a>, a <a href="http://stackoverflow.com/">stackoverflow</a> client for your <a href="http://en.wikipedia.org/wiki/Command-line_interface">terminal</a>. I thought it was pretty cool, so I looked at a couple of implementations (the <a href="https://github.com/gleitz/howdoi">original</a> in python and a <a href="https://github.com/roylez/howdoi">clone</a> in ruby) and decided to do my own version; I had just learned awk and wanted something to use it on. Besides, this version doesn’t require anything but awk and wget.</p>
<p>Get the code at:</p>
<ul>
<li><a href="https://raw.github.com/javier-lopez/learn/master/sh/tools/howdoi">https://raw.github.com/javier-lopez/learn/master/sh/tools/howdoi</a></li>
</ul>
<p><strong><a href="http://imgs.xkcd.com/comics/tar.png"><img src="http://imgs.xkcd.com/comics/tar.png" alt="" /></a></strong>
<!--<iframe class="showterm" src="http://showterm.io/ab7339312c9d960f09f77" width="640" height="350"> </iframe>--></p>
<p>Happy hacking ☺!</p>
setting up jekyll locally2014-02-22T00:00:00+00:00http://javier.io/blog/en/2014/02/22/github-page-build-failed<h2 id="setting-up-jekyll-locally">setting up jekyll locally</h2>
<h6 id="22-feb-2014">22 Feb 2014</h6>
<!--<iframe class="showterm" src="http://showterm.io/dd994deaf00a01fcb9c65" width="640" height="350"> </iframe> -->
<p>I ♡ <a href="https://github.com/">github</a>, it has never been easier to start working in open source projects =). One of their rock star services is <a href="http://pages.github.com/">github pages</a>, which allows people to setup static pages for their projects, it even provides some nice themes so your page doesn’t look awful. Many people however (mostly technical) use it to host their blogs (such as this one), you get great infrastructure, a subdomain (and the possibility of using your own domain), revisions (git) and markdown, all for free!, isn’t that freaking awesome!?</p>
<p>Github pages could be perfect, however they’re not (although they’re really close), sometimes when you’re using markdown and the translation markdown ⇨ html fails you’ll get a nice mail such as this one:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The page build failed with the following error:
page build failed
For information on troubleshooting Jekyll see:
https://help.github.com/articles/using-jekyll-with-pages#troubleshooting
If you have any questions please contact GitHub Support.
</code></pre></div></div>
<p>Beautiful! No sign of what went wrong =). OK, to be fair, github has recently started to <a href="https://github.com/blog/1706-descriptive-error-messages-for-failed-github-pages-builds">add more details</a>, however they’re still not sufficient; I still have to mirror their jekyll setup in order to see what’s really happening. Since I’ve done that more than a couple of times, I thought it would be a good idea to automate it.</p>
<pre class="sh_sh">
$ sh <(wget -qO- https://raw.githubusercontent.com/javier-lopez/learn/master/sh/is/gitpages)
...
$ git clone --depth=1 https://github.com/username/site.github.com
$ cd site.github.com
$ jekyll serve #fix errors till it works
</pre>
<p>It requires an Ubuntu ≥ 12.04 system and sudo credentials. Additional gotchas:</p>
<ul>
<li><a href="http://dwdii.github.io/2013/08/28/GitHub-Pages-Jekyll-Ampersands.html">http://dwdii.github.io/2013/08/28/GitHub-Pages-Jekyll-Ampersands.html</a></li>
</ul>
<p>Happy blogging 😄!</p>
the most portable language in the world, awk2014-02-22T00:00:00+00:00http://javier.io/blog/en/2014/02/22/awk<h2 id="the-most-portable-language-in-the-world-awk">the most portable language in the world, awk</h2>
<h6 id="22-feb-2014">22 Feb 2014</h6>
<p><strong><a href="/assets/img/93.png"><img src="/assets/img/93.png" alt="" /></a></strong></p>
<p>The other day while I was browsing I found an article called “<a href="http://www.computerworld.com.au/article/216844/a-z_programming_languages_awk/">The awk origins</a>”; I liked it so much that I decided to learn awk (it’s pronounced “auk”). I had already used one-liners but considered larger awk programs unfriendly and their syntax overcomplicated; however, once I started diving into it and set my fears aside I found it quite fun and easy to use. What a powerful tool built on minimal principles!</p>
<h2 id="awk-an-event-drive-language">Awk, an event-driven language</h2>
<p>The most important thing in awk (and what took me the most time to learn) was to understand that it’s an event-driven language based on 5 important areas:</p>
<ul>
<li><strong>begin</strong></li>
<li><strong>body</strong>
<ul>
<li><strong>search</strong></li>
<li><strong>action</strong></li>
</ul>
</li>
<li><strong>end</strong></li>
</ul>
<p>This means that every awk program (even the smallest ones) has begin, body and end sections. The begin and end sections are similar in that they’re executed only once, at the beginning and at the end of the program; the classic “Hello World” can be written in either section:</p>
<pre class="sh_sh">
$ awk 'BEGIN {print "Hello World"}' < /dev/null
Hello World
$ awk 'END {print "Hello World"}' < /dev/null
Hello World
</pre>
<p>Every section is defined by its name and its actions (which are written between {}). Awk programs are quoted between (‘) so the shell doesn’t interpret any variables or keywords. Awk programs can also be written in files and executed directly:</p>
<pre class="sh_sh">
$ cat hello.awk
#!/usr/bin/awk -f
BEGIN {print "Hello World"}
$ ./hello.awk
Hello World
</pre>
<p>In between is the body section, the most powerful one; it defines search patterns and related actions.</p>
<pre class="sh_sh">
$ awk '/.*/ {print $0}' file
</pre>
<p>The above line is comparable to <strong>$(cat file)</strong>: the search pattern is <strong>/.*/</strong> (any character) and the action is <strong>{print $0}</strong> (print current line). The body section is executed once per line, so if a file contains 10 lines, the body section is executed 10 times and prints all the content. Any number of pattern-action pairs can be declared within an awk program. The next example looks for <strong>daemon</strong> and <strong>root</strong> and prints every line where awk finds those strings.</p>
<pre class="sh_sh">
$ awk '/root/ {print $0} /daemon/ {print $0}' /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
</pre>
<p>If no search pattern is defined for an action, the action is executed once per line; if no action is defined for a search pattern, the default action is to print the current line; if no parameter is given to <strong>print</strong>, it prints the whole line (<strong>$0</strong>). Therefore the above examples can be rewritten as follows:</p>
<pre class="sh_sh">
$ awk '{print $0}' file
$ awk '{print}' file
$ awk '/root/ || /daemon/ {print $0}' /etc/passwd
$ awk '/root/ || /daemon/ {print}' /etc/passwd
$ awk '/root/ || /daemon/' /etc/passwd
</pre>
<p>These alternative ways of writing awk programs are (I think) part of the reason why awk seems like a cryptic language and why so many awk programs are tiny. Awk also defines default variables; some of the most important are:</p>
<blockquote>
<p>NR = Number of Record (line number)</p>
</blockquote>
<blockquote>
<p>NF = Number of Field</p>
</blockquote>
<blockquote>
<p>RS = Record separator (\n by default)</p>
</blockquote>
<blockquote>
<p>FS = Field separator (white spaces by default)</p>
</blockquote>
<p>If you’ve a file with the content:</p>
<pre class="lyric">
1 2 3
4 5 6
</pre>
<p>Awk will see 2 records with 3 fields each. So, <strong>$(cat -n file)</strong> can be emulated in awk with:</p>
<pre class="sh_sh">
$ awk '{print NR, $0}' file
</pre>
<p>As the search pattern is missing, the action is executed once for every line, each time printing <strong>NR</strong> plus the whole line; <strong>NR</strong> increases by 1 on every iteration. That’s a lot happening in a minuscule definition. Let’s review another example, <strong>$(wc -l)</strong>:</p>
<pre class="sh_sh">
$ awk 'END {print NR}' file
$ awk '{i++} END {print i}' file
</pre>
<p><strong>NR</strong> will always increment, so in the first program when the END section gets executed it prints the total number of lines in the file. The second example is easier to analyze: it doesn’t have a search pattern, so the action (i++) is always executed, and at the end of the program the counter is printed. It’s amazing how easily other Unix core utilities can be implemented in a single line; let’s now copy <strong>$(head)</strong>’s behavior:</p>
<pre class="sh_sh">
$ awk 'NR <= 10' file
$ awk -v hl=10 'NR <= hl' file
</pre>
<p>Does it make sense? Awk is not as difficult as it seems 😉. Sed can also be emulated:</p>
<pre class="sh_sh">
$ awk '{gsub(/original/,"replace"); print}' file
$ awk 'function sed(search, replace) { gsub(search,replace); print } {sed("search","replace")}' file
</pre>
<p>Awk can also use control structures and functions. In the first example it uses the gsub function to replace every “original” string with “replace” in a file, just as sed would do. In the second one, a function called “sed” is defined and used to replace the same strings. Awk is a Turing-complete language, so even though it may look like a toy, it’s a powerful tool that can be used to build sophisticated programs. Nevertheless, if you’ve read this far you already know its core principles and are ready to take advantage of its power.</p>
<p>I’m leaving some more examples to get you started; can you guess how they work?</p>
<h2 id="awk-as-unix-swiss-army-knife">Awk as Unix swiss army knife</h2>
<pre class="sh_sh">
cat file ▷ awk '{print}' file
cat -n file ▷ awk '{print NR, $0}' file
cat -n file ▷ awk '{print FNR, $0}' file
head file ▷ awk 'NR <= 10' file
head -15 file ▷ awk -v hl=15 'NR <= hl' file
cut -d: -f1 /etc/passwd ▷ awk -F":" '{print $1}' /etc/passwd
cut -d: -f1 /etc/passwd ▷ awk 'BEGIN {FS=":"} {print $1}' /etc/passwd
wc -l file ▷ awk '{i++} END {print i}' file
wc -l file ▷ awk 'END {print NR}' file
wc -w file ▷ awk '{total = total + NF}; END {print total+0}' file
grep pattern file ▷ awk '/pattern/' file
grep -v pattern file ▷ awk '!/pattern/' file
sed 's/foo/bar/g' ▷ awk '{gsub(/foo/,"bar"); print $0}' file
tail file ▷ awk -v tl=10 '{a=a b $0;b=RS;if(NR<=tl)next;a=substr(a,index(a,RS)+1)}END{print a}' file
tail -15 file ▷ awk -v tl=15 '{a=a b $0;b=RS;if(NR<=tl)next;a=substr(a,index(a,RS)+1)}END{print a}' file
tac file ▷ awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file
uniq file ▷ awk 'a !~ $0; {a=$0}'
</pre>
<h2 id="awk-one-liners">Awk one-liners</h2>
<pre class="sh_sh">
awk '$2 ~ /pattern/' file #print line when second field matches pattern
awk '$2 !~ /^[0-9]+$/' file #print line when second field is not a number
awk '1; {print ""}' file #adds double space
awk 'BEGIN {ORS="\n\n"}; 1' file #adds double space
awk 'NF {print $0 "\n"}' file #adds double space to lines with content
awk 'BEGIN {RS="";ORS="\n\n"}/pattern/' #print whole paragraphs where pattern is found
awk '{print $NF}' file #print the last field of every line
awk '{field=$NF} END{print field}' file #print the last field of the last line
awk 'NF > 4' file #print lines with more than 4 fields
awk '{sub(/^[ \t]+/, "");print}' file #delete white spaces at the beginning of a line
awk '{sub(/[ \t]+$/, "");print}' file #delete white spaces at the end of a line
awk '{gsub(/^[ \t]+|[ \t]+$/, "");print}' file #delete white spaces at the beginning and end of a line
awk '{$2=""; print}' file #delete the 2nd field of every line
awk '/AAA|BBB|CCC/' file #search and print "AAA", "BBB" or "CCC"
awk '/AAA.*BBB.*CCC/' file #search and print "AAA", "BBB" and "CCC" in that order
</pre>
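<p>Two of the one-liners above in action (the sample data is illustrative):</p>

```shell
# sample input with leading white space (illustrative)
printf '  a b c\nd e\n' > sample.txt

# print the last field of every line -> "c" and "e"
awk '{print $NF}' sample.txt

# delete white spaces at the beginning of each line
awk '{sub(/^[ \t]+/, ""); print}' sample.txt
```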
<p>References</p>
<ul>
<li><a href="http://awk.info">http://awk.info</a></li>
<li><a href="http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_toc.html">http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_toc.html</a></li>
<li><a href="http://www.grymoire.com/Unix/Awk.html">http://www.grymoire.com/Unix/Awk.html</a></li>
<li><a href="http://blog.bignerdranch.com/3799-a-crash-course-in-awk/">http://blog.bignerdranch.com/3799-a-crash-course-in-awk/</a></li>
<li><a href="http://f.javier.io/public/books/The_AWK_Programming_Language.pdf">The Awk Programming Language (1988) [pdf]</a></li>
</ul>
unstressed direct, indirect pronouns in Spanish2014-02-21T00:00:00+00:00http://javier.io/blog/en/2014/02/21/unstressed-object-pronouns-in-spanish<h2 id="unstressed-direct-indirect-pronouns-in-spanish">unstressed direct, indirect pronouns in Spanish</h2>
<h6 id="21-feb-2014">21 Feb 2014</h6>
<p><strong><a href="/assets/img/92.png"><img src="/assets/img/92.png" alt="" /></a></strong></p>
<p>Sometimes I help friends with Spanish grammar. I’m not an expert, but I remember I wasn’t that bad in my Spanish classes either, and it’s also a good chance to practice my English. Anyway, last time I explained what unstressed pronouns (pronombres átonos) are, and I thought it would be cool to start writing about them.</p>
<p>Unstressed object pronouns are used in Spanish to replace direct and indirect objects with pronouns. They are one of the trickiest Spanish structures, and even native speakers can misuse them, so they should be studied carefully. They can sometimes be translated as: me, you, him, her, it, us and them; however, be aware that they don’t have exactly the same meaning.</p>
<p>To use unstressed object pronouns correctly, you should be able to recognize with ease when a phrase contains either a direct or an indirect object. In short, a direct object is the part of the sentence that receives the action of the main verb; it can be found by asking what? or whom? of the verb.</p>
<h2 id="example">Example:</h2>
<pre>
Pepe eats fish / Pepe come pescado (what does Pepe eat?)
do(direct object) = fish = pescado
</pre>
<h2 id="exercises-find-the-direct-object-in-the-following-sentences">Exercises: Find the direct object in the following sentences</h2>
<blockquote>
<p>Juan loves María / Juan ama a María</p>
</blockquote>
<blockquote>
<p>Carlos fixes computers / Carlos arregla computadoras</p>
</blockquote>
<blockquote>
<p>The policeman saw the thieves / El policía vio a los ladrones</p>
</blockquote>
<p>The indirect object, on the other hand, is the one who receives the direct object, and it can be found by asking to whom? or for whom?</p>
<h2 id="example-1">Example:</h2>
<pre>
Lucas buys a watermelon for Sandra / Lucas compra una sandía para Sandra
do (direct object) = a watermelon = una sandía
io(indirect object) = for Sandra = para Sandra
</pre>
<h2 id="exercises-find-the-direct-and-indirect-object-of-the-following-phrases">Exercises: Find the direct and indirect object of the following phrases</h2>
<blockquote>
<p>Margarita made coffee for Marcos / Margarita hizo café para Marcos</p>
</blockquote>
<blockquote>
<p>Alba gave some tips to her son / Alba dio unos consejos a su hijo</p>
</blockquote>
<blockquote>
<p>The boss told the workers to arrive earlier / El jefe dijo a los trabajadores que llegaran más temprano</p>
</blockquote>
<p>Once you’re able to identify the existence of a direct/indirect object in a phrase, you’ll be able to use the appropriate pronoun. Take a look at the table in the header of this post.</p>
<h2 id="example-2">Example:</h2>
<pre>
Pepe eats fish / Pepe come pescado
do=fish=pescado, pescado refers to an animal in the third person singular (it, the fish)
</pre>
<p>It must be replaced with ‘lo’:</p>
<pre>
Pepe *lo* come (as a general rule, it is placed before the main verb)
</pre>
<p>Let’s review another example:</p>
<pre>
Juan loves María / Juan ama a María
do=a María, María refers to a person in the third person singular (she, María)
</pre>
<p>In this case, ‘a María’ must be replaced with ‘la’ (it’s feminine):</p>
<pre>
Juan *la* ama
</pre>
<h2 id="exercises-replace-the-direct-object-with-the-appropriate-pronoun">Exercises: Replace the direct object with the appropriate pronoun</h2>
<blockquote>
<p>Did you find your mother’s ring? / ¿Has encontrado el anillo de tu madre?</p>
</blockquote>
<blockquote>
<p>Roberto paid a lot of money for his new car / Roberto pagó mucho dinero por su nuevo carro</p>
</blockquote>
<blockquote>
<p>A friend will translate the book / Un amigo traducirá el libro</p>
</blockquote>
<p>The same technique can be used to replace indirect objects.</p>
<h2 id="example-3">Example:</h2>
<pre>
Lucas buys a watermelon for Sandra / Lucas compra una sandía para Sandra
do=a watermelon=una sandía
io=for Sandra=para Sandra
</pre>
<p>Sandra refers to a third person singular (she, Sandra) and can be replaced with ‘le’ because it is an indirect object. If it were a direct object, it would be replaced with ‘la’.</p>
<pre>
Lucas *le* compra una sandía
</pre>
<h2 id="exercises-replace-the-indirect-object-with-the-appropriate-pronoun">Exercises: Replace the indirect object with the appropriate pronoun</h2>
<blockquote>
<p>Tomorrow I’ll give the money to Eduardo / Mañana entregaré el dinero a Eduardo</p>
</blockquote>
<blockquote>
<p>I bought a gift for you / Compré un regalo para ti</p>
</blockquote>
<blockquote>
<p>The donations will be given back to the contributors this year / Los donativos de este año serán devueltos a los contribuyentes</p>
</blockquote>
<p>Both the direct and the indirect objects can be replaced at the same time; however, when doing so, a new rule applies: whenever ‘le’ and ‘lo’/‘la’ come together, ‘le’ (the indirect object pronoun) must be replaced with ‘se’.</p>
<h2 id="example-4">Example:</h2>
<pre>
Lucas buys a watermelon for Sandra / Lucas compra una sandía para Sandra
do=a watermelon=una sandía
io=for Sandra=para Sandra
Lucas la compra para Sandra (replacing the direct object)
Lucas le compra una sandía (replacing the indirect object)
Lucas se la compra (replacing both)
</pre>
<p>Pay attention to the <em>‘se’</em> which replaces <em>‘le’</em> as the indirect object.</p>
<h2 id="exercises-replace-the-directindirect-objects-with-the-appropriate-pronouns">Exercises: Replace the direct/indirect objects with the appropriate pronouns</h2>
<blockquote>
<p>I told the girls the reasons for my decision / Les dije a las chicas las razones de mi decisión</p>
</blockquote>
<blockquote>
<p>Did you give the chair back to the neighbor (female)? / ¿Devolviste la silla a la vecina?</p>
</blockquote>
<blockquote>
<p>Diana wrote a letter to Hugo / Diana escribió una carta para Hugo</p>
</blockquote>
<p>That’s it</p>
<p><strong>keywords, palabras clave:</strong> <em>leismo, laismo, loismo, objeto directo, objeto indirecto</em></p>
<ul>
<li><a href="http://www.youtube.com/watch?v=N_3flr9ni0s">http://www.youtube.com/watch?v=N_3flr9ni0s</a> (direct object / objeto directo)</li>
<li><a href="http://www.youtube.com/watch?v=jMzAQ2bQEx0">http://www.youtube.com/watch?v=jMzAQ2bQEx0</a> (indirect object / objeto indirecto)</li>
<li><a href="http://www.appstate.edu/~fountainca/1050/unidad2/pronombresatonos.html">http://www.appstate.edu/~fountainca/1050/unidad2/pronombresatonos.html</a> (unstressed pronouns / pronombres átonos)</li>
</ul>
nossa infância2014-02-12T00:00:00+00:00http://javier.io/blog/pt/2014/02/12/nossa-infanca<h2 id="nossa-infância">nossa infância</h2>
<h6 id="12-feb-2014">12 Feb 2014</h6>
<pre class="lyric">
Nossa infância é só uma lembrança
demasiado vaga
longe ficou aquela nossa terra
a terra que nos viu crescer
Partiram nossos primeiros suspiros
sem nos dar tempo de madurar
ainda nos despertamos
e já somos homens
e já somos pais
sem nos dar tempo de olhar ao futuro
Não jogaremos mais
embora nossa mente siga brincando
já somos pais
o tempo não dá trégua para pensar
encontramo-nos, desmaia-nos
o tempo não dá trégua para curar
Nossos sonhos já não existem em céu nenhum
nossos primeiros suspiros partiram ontem
e já somos pais
não temos tempo de madurar, não temos
Juan Gregorio Regino / Nuestra infancia (Nga kamá xixií)
</pre>
por que não está vivendo seus sonhos?2014-02-05T00:00:00+00:00http://javier.io/blog/pt/2014/02/05/por-que-nao-vive-seus-sonhos<h2 id="por-que-não-está-vivendo-seus-sonhos">por que não está vivendo seus sonhos?</h2>
<h6 id="05-feb-2014">05 Feb 2014</h6>
<p>É uma pergunta interessante, pois poucas pessoas os tornam realidade.</p>
<p>Todos nós gostamos de sonhar, o fazemos na ducha, durante o trajeto no ônibus, enquanto esperamos na fila do supermercado, porém, a maioria desses sonhos continuam sendo sonhos. A vida é curta e a cada ano o tempo passa mais depressa (é normal, nossos cérebros e nervos transmitem a informação <a href="http://deepblue.lib.umich.edu/bitstream/handle/2027.42/50152/880151007_ftp.pdf?sequence=1">mais lentamente</a> a cada ano). Você tem menos tempo do que acha.</p>
<p>Não espere mais, pois mesmo se começar agora, você já consumiu quase todo o seu tempo. Levante-se e torne seus sonhos realidade!</p>
introduction to drupal 7 installation profiles2014-01-26T00:00:00+00:00http://javier.io/blog/en/2014/01/26/introduction-to-drupal-7-installation-profiles<h2 id="introduction-to-drupal-7-installation-profiles">introduction to drupal 7 installation profiles</h2>
<h6 id="26-jan-2014">26 Jan 2014</h6>
<p><strong><a href="/assets/img/91.gif"><img src="/assets/img/91.gif" alt="" /></a></strong></p>
<p>Last weekend I worked on a Drupal site and it was not fun =( mostly due to incomplete and inaccurate documentation. My goal was to create a distributable bundle for <a href="https://github.com/javier-lopez/ubuntu-mx-www">Spanish Ubuntu</a> local teams. I already had a base theme and didn’t think it could be that hard; however, what started as a simple task ended as a very long weekend. In this post I’ll try to document what issues I found and how they were worked out.</p>
<p>For the record, I must say I’m not a web developer, I find all the related technologies quite difficult to understand, and I didn’t have any previous experience with drupal internals either.</p>
<p>From the beginning I thought the work should be automated as much as possible; ideally an administrator would be able to download the bundle, drop it onto the file system, run the web installer and start creating content. Happily, the result is quite similar:</p>
<ul>
<li>Download and place the bundle in a directory read by a web server (eg, apache)</li>
<li>Go to the server ip</li>
<li>Select the ‘Ubuntu-mx’ profile</li>
<li>In a Spanish-translated interface, input the database settings</li>
<li>Configure the administrator credentials</li>
<li>Create content</li>
</ul>
<p>The resulting site is translated to Spanish and enables a bunch of modules (l10n_update, admin_menu, smtp, ckeditor), an Ubuntu theme, search permissions and menu links.</p>
<p>For this to work, I used:</p>
<ul>
<li><a href="https://drupal.org/node/1022020">installation profiles</a></li>
<li><a href="https://drupal.org/project/features">feature modules</a></li>
</ul>
<h2 id="installation-profiles">Installation profiles</h2>
<p>Installation profiles are special modules that allow you to declare which modules, languages and themes to load by default; they can also be used to modify the installer to add or remove steps.</p>
<p>They must have <a href="https://drupal.org/taxonomy/term/33388">.info</a>, <a href="https://api.drupal.org/api/drupal/modules!profile!profile.module/7">.profile</a> and <a href="https://api.drupal.org/api/drupal/includes!install.core.inc/function/install_tasks/7">.install</a> files at:</p>
<ul>
<li><strong>/profiles/profile_name/</strong></li>
</ul>
<p>I named my module <strong>‘umx’</strong></p>
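<p>The skeleton described above can be created up front; a minimal sketch for a profile named umx (the translations directory is for the po file used later):</p>

```shell
# create the installation profile skeleton under profiles/umx/
mkdir -p profiles/umx/translations
touch profiles/umx/umx.info \
      profiles/umx/umx.profile \
      profiles/umx/umx.install
```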
<h3 id="info">.info</h3>
<p>This file contains name, description, drupal version and dependencies (modules enabled by default):</p>
<p><strong>declaring only core modules as dependencies</strong></p>
<p>The content of my <strong>/profiles/umx/umx.info</strong> file was:</p>
<pre class="sh_properties">
name = Ubuntu-mx
description = Install Ubuntu-mx modules, language (es) and configuration
version = 0.1
core = 7.x
dependencies[] = block
dependencies[] = color
dependencies[] = comment
dependencies[] = contextual
dependencies[] = dashboard
dependencies[] = help
dependencies[] = image
dependencies[] = list
dependencies[] = number
dependencies[] = options
dependencies[] = path
dependencies[] = taxonomy
dependencies[] = dblog
dependencies[] = shortcut
dependencies[] = overlay
dependencies[] = field_ui
dependencies[] = file
dependencies[] = rdf
dependencies[] = umx_conf
</pre>
<p>I found it much more comfortable to declare only core modules as dependencies and carry the rest of the configuration in a separate module (generated by the features module). This way I can keep using the same unmodified profile and alter the resulting site by regenerating the umx_conf module when necessary.</p>
<h3 id="profile">.profile</h3>
<p>In the <strong>.profile</strong> and <strong>.install</strong> files you can write functions to override/define hooks. I used <strong>/profile/umx/umx.profile</strong> to configure the default language to Spanish (as a result, the language dialogue is skipped):</p>
<pre class="sh_php">
function umx_profile_details() {
$details['language'] = "es";
return $details;
}
</pre>
<p>The default language can also be configured in a feature module (in this example, <strong>umx_conf</strong>); however, if you do so, the installation process itself will run in English, and if it’s declared twice (here and in a feature module), the installation will fail with a SQL duplicate key error.</p>
<p>It’s not optimal, but it’s the best I could do. I located the es.po file at:</p>
<ul>
<li><strong>/profiles/umx/translations/es.po</strong></li>
</ul>
<p>The localize.drupal.org server ships po files with a drupal version prefix; if you download them from there, you should rename them, e.g.:</p>
<pre>
drupal-7.26.es.po -> es.po
</pre>
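<p>If several translations are downloaded, the renaming can be scripted; a sketch, assuming files named like drupal-7.26.es.po:</p>

```shell
# illustrative: pretend this file was downloaded from localize.drupal.org
touch drupal-7.26.es.po

# strip the "drupal-<version>." prefix, keeping only "<lang>.po"
for f in drupal-*.po; do
    [ -e "$f" ] || continue          # nothing matched the glob
    mv "$f" "${f##*[0-9].}"          # drupal-7.26.es.po -> es.po
done
```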
<ul>
<li><a href="https://drupal.org/node/1326106">https://drupal.org/node/1326106</a></li>
</ul>
<h3 id="install">.install</h3>
<p>Here you can override install, update and uninstall functions. Since feature modules don’t support theme settings (something that could be really useful), I declared the default theme in this file (<strong>/profile/umx/umx.install</strong>):</p>
<pre class="sh_php">
function umx_install() {
db_update('system')
->fields(array('status' => 1))
->condition('type', 'theme')
->condition('name', 'umxtheme')
->execute();
variable_set('theme_default', 'umxtheme');
}
</pre>
<p>According to the documentation, theme settings should be declared in <strong>profile_themes_enabled()</strong> with the <strong>theme_enable()</strong> function; however, I was unable to make any of them work.</p>
<p>Snippets which did <strong>NOT</strong> work (drupal 7.26):</p>
<pre class="sh_php">
function umx_themes_enabled() {
//any code;
}
</pre>
<pre class="sh_php">
function umx_install_finished() {
//any code;
}
</pre>
<pre class="sh_php">
function umx_update_N() {
//any code;
}
</pre>
<pre class="sh_php">
function umx_install() {
variable_set('theme_default','umxtheme');
}
</pre>
<pre class="sh_php">
function umx_install() {
$list_themes = list_themes(TRUE);
$major_version = (int)VERSION;
$theme_default = isset($list_themes['umxtheme']) ? 'umxtheme' : 'garland';
$admin_theme = isset($list_themes['seven']) ? 'seven' : 'garland';
variable_set('theme_default', $theme_default);
theme_enable($theme_default);
theme_disable(array('bartik'));
if($affect_admin_theme){
variable_set('admin_theme', $admin_theme);
}
if (module_exists('switchtheme')) {
if (empty($_GET['theme']) || $_GET['theme'] !== $theme_default) {
$query = array(
'theme' => $theme_default
);
if($major_version < 7){
$options = $query;
}
else{
$options = array('query' => $query);
}
drupal_goto($_GET['q'], $options);
}
}
}
</pre>
<pre class="sh_php">
function umx_install() {
$enable = array(
'theme_default' => 'umxtheme',
'admin_theme' => 'seven',
);
theme_enable($enable);
foreach ($enable as $var => $theme) {
if (!is_numeric($var)) {
variable_set($var, $theme);
}
}
theme_disable(array('bartik'));
}
</pre>
<p>Drupal themes must be placed at:</p>
<ul>
<li><strong>/sites/all/themes</strong></li>
</ul>
<p>One lesson I learned was that themes should be named clearly. At the beginning I declared the theme as ‘umx’ (for ubuntu-mx theme) and put it in <strong>/sites/all/themes/umx-theme/</strong>; later on, while trying to configure the bundle with a default theme, I got confused because the installation profile had the same name and I was not sure which one I was referring to. Name your themes with a unique string (I finally decided to rename ‘umx’ to ‘umxtheme’).</p>
<p>Ignore the <strong>name</strong> field written in the <em>.info</em> file, and prefix your hook functions with the selected drupal theme (e.g., umxtheme_footer_text())</p>
<ul>
<li><a href="http://www.computerminds.co.uk/articles/setting-default-theme-during-installation">Setting default theme for Drupal</a></li>
<li><a href="http://codedrup.blogspot.mx/2012/10/setting-different-default-theme-by.html">Setting a different default theme by default in an install profile</a></li>
<li><a href="http://www.isaacsukin.com/news/2011/01/10/how-write-drupal-7-installation-profile">how to write a drupal 7 installation profile</a></li>
</ul>
<h2 id="feature-modules">Feature modules</h2>
<p>Feature modules are good for bundling configurations, permissions and dependencies on non-default modules. I created <strong>umx_conf</strong> to iterate faster. Using feature modules you can modify drupal settings through a browser and then export the result to code from the features menu.</p>
<p>Autogenerated features modules can contain:</p>
<ul>
<li><em>.info</em></li>
<li><em>.module</em></li>
<li><em>.feature.submodule.inc</em></li>
</ul>
<p>You would rarely need to modify them directly. In my experience (a weekend), the autogenerated files will sometimes contain mistakes, therefore I modified them slightly to fit my needs:</p>
<h3 id="info-1">.info</h3>
<p>In this file dependencies are declared; the generator does a quite clever job of recursively adding all the dependencies your configurations depend on.</p>
<pre class="sh_properties">
name = umx conf
description = Common settings for the Ubuntu-mx local portal.
core = 7.x
package = Features
version = 7.x-0.1
project = umx_conf
dependencies[] = admin_menu
dependencies[] = admin_menu_toolbar
dependencies[] = ckeditor
dependencies[] = features
dependencies[] = l10n_update
dependencies[] = locale
dependencies[] = menu
dependencies[] = search
dependencies[] = smtp
features[ckeditor_profile][] = Advanced
features[ckeditor_profile][] = CKEditor Global Profile
features[ckeditor_profile][] = Full
features[features_api][] = api:2
features[user_permission][] = search content
features[user_permission][] = use advanced search
features[menu_custom][] = main-menu
features[menu_links][] = main-menu_portada:<front>
features[menu_links][] = main-menu_foros:http://google.com
features[menu_links][] = main-menu_preguntas:http://ubuntu.shapado.com
features[menu_links][] = main-menu_wiki:https://wiki.ubuntu.com/UbuntuMxTeam
features[menu_links][] = main-menu_chat:http://google.com
features[menu_links][] = main-menu_descargar-ubuntu:http://google.com
</pre>
<h3 id="module">.module</h3>
<p>In this file you can override hook functions; I left it blank.</p>
<h3 id="featuresubmoduleinc">.feature.submodule.inc</h3>
<p>In these files per-module configurations are saved. In my case, I had to modify <strong>/sites/all/modules/umx_conf/umx_conf.features.menu_links.inc</strong> because some links weren’t exported.</p>
<pre class="sh_php">
function umx_conf_menu_default_menu_links() {
$menu_links = array();
$menu_links['main-menu_portada:<front>'] = array(
'menu_name' => 'main-menu',
'link_path' => '<front>',
'router_path' => '',
'link_title' => 'Portada',
'options' => array(
'attributes' => array(
'title' => '',
),
'identifier' => 'main-menu_portada:<front>',
),
'module' => 'menu',
'hidden' => 0,
'external' => 1,
'has_children' => 0,
'expanded' => 0,
'weight' => 0,
'customized' => 1,
);
...
</pre>
<p>I mostly had to adjust the <strong>weight</strong> parameter and add missing links. It’s not difficult if you follow the syntax (even if you don’t know php).</p>
<ul>
<li><a href="http://www.youtube.com/watch?v=DxRBEaD9JCA">Introduction to drupal module features</a></li>
</ul>
<h2 id="extra">Extra</h2>
<p>If you had problems with the above description, a probably better and more up-to-date approach to Drupal installation profiles is described at:</p>
<ul>
<li><a href="http://salsadigital.com.au/news/drupal-installation-profile-and-distributions">http://salsadigital.com.au/news/drupal-installation-profile-and-distributions</a></li>
</ul>
<p>That’s it, I hope this information can save someone some time, have fun!</p>
multicursor in ubuntu2014-01-06T00:00:00+00:00http://javier.io/blog/en/2014/01/06/multicursor-in-ubuntu<h2 id="multicursor-in-ubuntu">multicursor in ubuntu</h2>
<h6 id="06-jan-2014">06 Jan 2014</h6>
<p><strong><a href="/assets/img/88.png"><img src="/assets/img/88.png" alt="" /></a></strong></p>
<p>During my last holidays I found myself in a position where I had to share my laptop with other people. I knew it was possible to use different keyboards/mice with Linux but had never tried... until now 😏</p>
<p>In this scenario I had an extra monitor and an extra mouse, so the first thing I did was enable the monitor; since I use <a href="http://i3wm.org/">i3</a> as my window manager, I used raw xrandr to extend my visual setup.</p>
<pre>
$ xrandr --output VGA1 --mode 1680x1050 --right-of LVDS1
</pre>
<p>Pretty simple, I just love this kind of tools. Next item, enable mouse. For this device to work I used <a href="http://cgit.freedesktop.org/xorg/app/xinput/">xinput</a>.</p>
<pre>
$ xinput create-master Auxiliary
$ xinput list #get the mouse id
$ xinput reattach 10 "Auxiliary pointer" #use the id to set it as the auxiliary pointer
</pre>
<p>After applying these changes, the xinput configuration looked like this:</p>
<pre>
xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Microsoft Microsoft® Nano Transceiver v1.0 id=11 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=14 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=15 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Power Button id=8 [slave keyboard (3)]
↳ Microsoft Microsoft® Nano Transceiver v1.0 id=9 [slave keyboard (3)]
↳ Integrated Camera id=12 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=13 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=16 [slave keyboard (3)]
⎡ Auxiliary pointer id=17 [master pointer (18)]
⎜ ↳ Microsoft Microsoft® Nano Transceiver v1.0 id=10 [slave pointer (17)]
⎜ ↳ Auxiliary XTEST pointer id=19 [slave pointer (17)]
⎣ Auxiliary keyboard id=18 [master keyboard (17)]
↳ Auxiliary XTEST keyboard id=20 [slave keyboard (18)]
</pre>
<p>That’s it; the experience wasn’t really bad, i3 reacts correctly most of the time and although there was some confusion, it was manageable 😊</p>
<ul>
<li><a href="https://wiki.archlinux.org/index.php/Multi-pointer_X">https://wiki.archlinux.org/index.php/Multi-pointer_X</a></li>
</ul>
ssh captcha2013-12-17T00:00:00+00:00http://javier.io/blog/en/2013/12/17/ssh-captcha<h2 id="ssh-captcha">ssh captcha</h2>
<h6 id="17-dec-2013">17 Dec 2013</h6>
<p><strong><a href="https://github.com/javier-lopez/pam_captcha"><img src="/assets/img/pam_captcha.png" alt="" /></a></strong>
<!--<iframe class="showterm" src="http://showterm.io/53a85bc1b41c096c83130" width="640" height="350"> </iframe>--></p>
<p>Some days ago, while I was reviewing some data, I noticed a spammer on one of my remote machines. Since I was mostly using the box for running experiments, I decided to rebuild it. Upon completion, I decided to improve my default ssh settings; I had liked using a single password for all my ssh needs a bit too much 😞</p>
<p>I know some ways to improve security: I could change the password to a really difficult one, change the default port, filter by ip, by tries (fail2ban), disable password login completely and allow only key-based logins, etc.</p>
<p>In the end, however, I decided to just add a captcha protection. Why? Most ssh attacks are automated: people run scripts that test thousands of passwords and run certain commands on success; these scripts won’t be able to recognize the slight modification to the login process (they’re really dumb). On the other hand, I don’t need overcomplicated solutions, or more systems to administer. Ssh key-based login is great, but sometimes I just need access from third-party machines.</p>
<p>Lastly, some other popular solutions have come up, but for one reason or another I couldn’t feel comfortable with them:</p>
<ul>
<li><a href="https://code.google.com/p/google-authenticator/">google authenticator</a> (my cellphone is most of the time lost, turned off or out of battery; do I live under a rock? Not at all! But I don’t get the always-online hype.)</li>
<li><a href="http://barada.sourceforge.net/">barada</a> (see above reason)</li>
<li><a href="https://www.cl.cam.ac.uk/~mgk25/otpw.html">otpw</a> (printing and carrying passwords with me?, you must be kidding)</li>
<li><a href="http://ubuntuforums.org/showthread.php?t=1891356">otp</a> (I may try this one)</li>
<li><a href="https://github.com/chrishunt/github-auth">github auth</a> (unrelated but it’s a nice way to do pair programming fast)</li>
<li><a href="http://blog.authy.com/two-factor-ssh-in-thirty-seconds">authy</a> (people seems to really like cellphones)</li>
<li>any other method who involves <a href="https://www.duosecurity.com/">ForceCommand</a></li>
</ul>
<h3 id="installation">Installation</h3>
<pre>
$ sudo add-apt-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install libpam-captcha
</pre>
<p>Be aware that the previous steps will only work on supported Ubuntu LTS versions.</p>
<h3 id="extra">Extra</h3>
<h2 id="sentry">Sentry</h2>
<p>If additional security is desired consider using <a href="https://www.tnpi.net/wiki/Sentry">sentry</a> over fail2ban, denyhosts, sshblacklist, etc, really.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget http://www.tnpi.net/internet/sentry.pl
$ sudo perl sentry.pl
$ echo "sshd : /var/db/sentry/hosts.deny : deny" > hosts
$ echo "sshd : ALL : spawn /var/db/sentry/sentry.pl -c --ip=%a : allowsendmail: all" >> hosts
$ cat hosts /etc/hosts.allow > hosts.allow
$ sudo mv hosts.allow /etc/ && rm hosts
</code></pre></div></div>
<h2 id="fail2ban">Fail2ban</h2>
<p>The following fail2ban regex will match the messages generated by the ssh captcha:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#/etc/fail2ban/filter.d/sshd.conf
^%(__prefix_line)s(?:error: PAM: )?Permission denied for .* from <HOST>$
</code></pre></div></div>
<p>Thanks Jordan! 😊</p>
<ul>
<li><a href="http://www.semicomplete.com/projects/pam_captcha/">http://www.semicomplete.com/projects/pam_captcha/</a></li>
<li><a href="https://github.com/minos-org/libpam-captcha">https://github.com/minos-org/libpam-captcha</a></li>
<li><a href="https://github.com/minos-org/libpam-captcha-deb">https://github.com/minos-org/libpam-captcha-deb</a></li>
</ul>
simple pxe setup2013-11-19T00:00:00+00:00http://javier.io/blog/en/2013/11/19/simple-pxe-setup<h2 id="simple-pxe-setup">simple pxe setup</h2>
<h6 id="19-nov-2013">19 Nov 2013</h6>
<p>There are several ways to set up a <a href="http://es.wikipedia.org/wiki/Preboot_Execution_Environment">pxe</a> (useful mostly for massive installations); this is my personal method: a preboot execution environment in 68KB with batteries included (pxelinux, dhcpd, tftp) and hands-free installation.</p>
<!--<iframe class="showterm" src="http://showterm.io/f2ac25e4df1e7ad5e989a" width="640" height="300"> </iframe>-->
<!--**[![](/assets/img/87.jpg)](/assets/img/87.jpg)**-->
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sh <(wget -qO- https://raw.githubusercontent.com/javier-lopez/learn/master/sh/tools/pxe)
[+] setting pxe environment in ./pxe_setup ...
- creating ./pxe_setup/menu.c32 ...
- creating ./pxe_setup/pxelinux.0 ...
- creating ./pxe_setup/simple-dhcpd ...
- creating ./pxe_setup/simple-tftpd ...
- creating ./pxe_setup/pxelinux.cfg/default ...
- creating ./pxe_setup/ubuntu/ubuntu.menu ...
- creating ./pxe_setup/pxe/fedora/fedora.menu ...
- creating ./pxe_setup/tools/tools.menu ...
</code></pre></div></div>
<p>The above command is the heart of the system: a script that creates a directory structure with all the tools and menus required to boot at least ubuntu/fedora (it can be personalized to boot other distros). After executing the script, you’ll need to download two extra files: an initrd installer and a linux kernel.</p>
<p>As an example I’ll download the Ubuntu 12.04 amd64 corresponding files:</p>
<ul>
<li><a href="http://archive.ubuntu.com/ubuntu/dists/precise-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/linux">linux</a></li>
<li><a href="http://archive.ubuntu.com/ubuntu/dists/precise-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/initrd.gz">initrd.gz</a></li>
</ul>
<pre class="sh_sh">
$ wget http://archive.ubuntu.com/.../amd64/initrd.gz -O pxe_setup/ubuntu/1204/amd64/initrd.gz
$ wget http://archive.ubuntu.com/.../amd64/linux -O pxe_setup/ubuntu/1204/amd64/linux
</pre>
<h2 id="pxe-enabled-router">Pxe enabled router</h2>
<p>Some routers can forward pxe requests; you can configure them to point all pxe requests to the machine running the tftp server (which will provide pxelinux.0 and the other required files) to boot other systems. In this scenario, the ip assigned to the host where this setup was done needs to be entered in the router, with the <strong>pxelinux.0</strong> string as the path.</p>
<p>And start the tftp daemon in the source machine:</p>
<pre class="sh_sh">
$ cd pxe_setup && sudo python ./simple-tftpd
</pre>
<h2 id="computer-with-at-least-2-network-interfaces">Computer with at least 2 network interfaces</h2>
<p>If the router cannot forward pxe requests, or you don’t have the permissions to configure it, you can run a local dhcpd and connect to the target machines through a second network interface (the first one will be used to connect to the Internet to download the installation files).</p>
<p>Let’s imagine wlan0 and eth0 are the wireless and wired interfaces of a laptop; the first one is connected to the Internet and the second one to other machines, through a switch/router or directly. The first step in this scenario is to allow wlan0 to act as a bridge between the target computers and the Internet:</p>
<pre class="sh_sh">
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
</pre>
<p>And assign a local ip to the wired interface (eth0):</p>
<pre class="sh_sh">
$ while :; do sudo ifconfig eth0 10.99.88.1; sleep 3; done
</pre>
<p>NOTE: In systems governed by <a href="https://wiki.gnome.org/Projects/NetworkManager">NetworkManager</a> it’s better to use its infrastructure or disable it completely before running the above command.</p>
<p>Finally, the dhcp and tftp daemons can be launched:</p>
<pre class="sh_sh">
$ cd pxe_setup && sudo python ./simple-dhcpd -i eth0 -a 10.99.88.1
$ cd pxe_setup && sudo python ./simple-tftpd
</pre>
<p>Upon booting, the target machines will print a menu asking which system to install (ubuntu or fedora), sweet n_n/</p>
<h2 id="extra-hands-free">extra, hands-free</h2>
<p>Most popular distributions support completely automated installations through preseed, kickstart, etc. This setup is no exception; it’s been configured to provide a hands-free installation for Ubuntu. The preseed file used can be retrieved at:</p>
<ul>
<li><a href="http://people.ubuntu.com/~javier-lopez/conf/preseed/minimal.preseed">http://people.ubuntu.com/~javier-lopez/conf/preseed/minimal.preseed</a></li>
</ul>
<p>It supports two extra boot parameters:</p>
<ul>
<li><strong>proxy=http://url</strong>, for using a proxy that doesn’t break the installation process</li>
<li><strong>user=joe</strong>, for setting a default user (admin by default)</li>
</ul>
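<p>Put together, a hands-free pxelinux.cfg/default entry using this preseed and both parameters could look like the following sketch (the proxy address and user name are illustrative):</p>

```
LABEL ubuntu-auto
  KERNEL ubuntu/1204/amd64/linux
  APPEND initrd=ubuntu/1204/amd64/initrd.gz auto=true priority=critical url=http://people.ubuntu.com/~javier-lopez/conf/preseed/minimal.preseed proxy=http://10.99.88.1:3128 user=joe
```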
<h2 id="uninstallation">uninstallation</h2>
<p>When the installation process ends, the PXE environment can be easily removed with:</p>
<pre class="sh_sh">
$ rm -rf pxe_setup
</pre>
<p>Simple! 😏</p>
<p><strong>Idea</strong>: Create a vm with 2 network interfaces, the first one in <em>bridge</em> mode assigned to wlan0, and the second one in <em>bridge</em> / <em>internal network</em> mode assigned to eth0, configure this setup and take a snapshot for an instant PXE installer experience.</p>
<p>References:</p>
<ul>
<li><a href="https://github.com/psychomario/PyPXE">https://github.com/psychomario/PyPXE</a></li>
<li><a href="http://javier.io/blog/es/2010/12/14/compartir-conexion-pc-a-pc.html">http://javier.io/blog/es/2010/12/14/compartir-conexion-pc-a-pc.html</a></li>
</ul>
shundle2013-11-15T00:00:00+00:00http://javier.io/blog/en/2013/11/15/shundle<h2 id="shundle">shundle</h2>
<h6 id="15-nov-2013">15 Nov 2013</h6>
<p><a href="https://github.com/javier-lopez/shundle">Shundle</a> is a general sh plugin manager I wrote when I realized how messy my ~/.bashrc was getting. It also helped me learn more about how to write portable sh code. It’s not intended to be used by everyone; actually, it could scare a lot of people =)</p>
<p>However if you feel brave enough to test it, go ahead, it’s free software!</p>
<p><strong><a href="/assets/img/shundle-2.gif"><img src="/assets/img/shundle-2.gif" alt="" /></a></strong></p>
<p>I’ve created a few plugins around it; <a href="https://github.com/javier-lopez/shundle-plugins/tree/master/colorize">colorize</a>, <a href="https://github.com/javier-lopez/shundle-plugins/tree/master/aliazator">aliazator</a>, <a href="https://github.com/javier-lopez/shundle-plugins/tree/master/eternalize">eternalize</a>. The idea is that shundle loads/unloads as many as the user wishes; right now it adds 0m0.110s with all the plugins enabled, and 0m0.048s without any, to the average bash startup time (I’m working on getting more shells supported). Note: I tested it on a dual core cpu.</p>
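<p>For reference, such startup numbers can be reproduced with a rough timing loop like this one (a sketch; <code class="language-plaintext highlighter-rouge">avg_startup</code> is my own helper, not part of shundle):</p>

```shell
# Average the wall time of N shell startups, in milliseconds (GNU date)
avg_startup() {
    runs=${1:-10}
    start=$(date +%s%N)
    i=0
    while [ "$i" -lt "$runs" ]; do
        bash -c exit            # use 'bash -i -c exit' to include ~/.bashrc
        i=$((i + 1))
    done
    end=$(date +%s%N)
    echo $(( (end - start) / runs / 1000000 ))
}
```

<p>Comparing the output with and without <strong>Bundle=</strong> lines gives the per-plugin overhead.</p>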
<p>Plugins are enabled by placing a <strong>Bundle=</strong> directive in the shell profile file (~/.bashrc in bash), e.g. enabling aliazator:</p>
<pre class="sh_sh">
Bundle='javier-lopez/shundle-plugins/aliazator.git'
</pre>
<p>After that, shundle itself needs to be downloaded:</p>
<pre class="sh_sh">
$ git clone --depth=1 https://github.com/javier-lopez/shundle ~/.shundle/bundle/shundle
</pre>
<p>Then, shundle will set up everything else (a new tab will need to be opened, or the shell profile sourced):</p>
<pre class="sh_sh">
$ . ~/.bashrc #source your .zshrc or the shell initialization file you use
$ shundle install
</pre>
<p>Shundle will install the desired plugins, and after reloading the session (or opening another tab) a new theme with several commands will be available (aliazator, eternalize, etc). How is this different from downloading scripts and placing them in /usr/local/bin or in $PATH? Well, the idea is that eventually only the shell profile file gets tracked, to replicate a unique (cli) environment anywhere.</p>
public cloud services (digitalocean, aws) and vagrant2013-11-07T00:00:00+00:00http://javier.io/blog/en/2013/11/07/public-cloud-services-and-vagrant<h2 id="public-cloud-services-digitalocean-aws-and-vagrant">public cloud services (digitalocean, aws) and vagrant</h2>
<h6 id="07-nov-2013">07 Nov 2013</h6>
<p><strong><a href="/assets/img/86.png"><img src="/assets/img/86.png" alt="" /></a></strong></p>
<p>I like to keep a fast, ordered and stable computer, that’s why I use virtual machines, containers, public cloud services and other means to keep it that way, all my ram belongs to firefox 😅</p>
<p>The cloud is great; I can do more with less because cloud machines usually have more resources than my laptop and plenty of bandwidth 😍. My favorite elastic cloud is <a href="http://digitalocean.com/">DigitalOcean</a> ($5/month); some time ago I also tried <a href="http://aws.amazon.com/ec2/">Ec2</a>, but its pricing scheme made me uncomfortable. Other than that, I also use <a href="http://lowendbox.com/">Low End Boxes</a> (LEB) when running long term tasks; it’s amazing how far you can go with a $20/year box.</p>
<p>So, getting back to the main topic, it turns out that through plugins, <a href="http://www.vagrantup.com/">vagrant</a> is able to launch and provision remote machines; that’s what I’m using to interact with cloud instances.</p>
<pre class="sh_sh">
$ vagrant up --provider=digital_ocean
$ vagrant up --provider=aws
</pre>
<p>It’s not perfect (vagrant takes ages just to print a help screen), but I think I can manage to use it till I find something better; recommendations are welcome.</p>
<h2 id="vagrant">Vagrant</h2>
<p>Vagrant’s installation process is a breeze; it supports OSX, Windows and Linux. In some Linux distributions it’s even included in the official repositories, but such versions are commonly out of date (that’s the case with Ubuntu), so it’s better to download Vagrant from its site.</p>
<ul>
<li><a href="http://downloads.vagrantup.com/">http://downloads.vagrantup.com/</a></li>
</ul>
<pre class="sh_sh">
$ sudo dpkg -i vagrant_version.deb
$ #vagrant will be installed in /opt/vagrant/
</pre>
<h3 id="vagrant-digitalocean">Vagrant-digitalocean</h3>
<p>Once vagrant is on board, it can be used to download plugins.</p>
<pre class="sh_sh">
$ vagrant plugin install vagrant-digitalocean
</pre>
<h3 id="vagrant-aws">Vagrant-aws</h3>
<p>The vagrant-aws plugin is somewhat troublesome; even though its documentation doesn’t <a href="https://github.com/mitchellh/vagrant-aws/issues/163">mention it</a>, it requires some build dependencies:</p>
<pre class="sh_sh">
$ sudo apt-get install build-essential libxslt-dev libxml2-dev zlib1g-dev
</pre>
<p>The plugin installation isn’t that bad:</p>
<pre class="sh_sh">
$ vagrant plugin install vagrant-aws
</pre>
<p>To play well with aws, you’ll need to create a new <a href="https://github.com/mitchellh/vagrant-aws/issues/95">default security group</a> that allows inbound connections on port 22; it’s dumb considering the plugin can deploy new instances but doesn’t set up a valid security group for them afterwards. Don’t forget to upload your public ssh key too.</p>
<h3 id="vagrantfile">Vagrantfile</h3>
<p>Finally, the additional providers can be used in Vagrantfile files:</p>
<pre>
VAGRANT_API_VERSION = "2"
Vagrant.configure(VAGRANT_API_VERSION) do |config|
config.vm.provider :digital_ocean do |provider, override|
override.ssh.private_key_path = '~/.ssh/id_rsa'
override.ssh.username = 'admin'
override.vm.box = 'digital_ocean'
override.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"
override.vm.provision "shell", inline: "su - #{override.ssh.username} -c \"sh <(wget -qO- javier.io/s)\""
provider.image = 'ubuntu-12-04-x64'
provider.region = 'nyc2'
#provider.size = '16gb'
#provider.size = '8gb'
#provider.size = '4gb'
#provider.size = '2gb'
#provider.size = '1gb'
provider.size = '512mb'
provider.private_networking = 'false'
provider.setup = 'true'
provider.token = 'ACCESS_KEY_SECRET'
provider.ca_path = '/etc/ssl/certs/ca-certificates.crt'
end
config.vm.provider :aws do |provider, override|
#depends on: 'build-essential libxslt-dev libxml2-dev zlib1g-dev' on ubuntu
#requires a custom security group to allow input connections to port 22
override.ssh.private_key_path = "~/.ssh/id_rsa"
override.ssh.username = "ubuntu"
override.vm.box = 'dummy'
override.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
override.vm.provision "shell", inline: "su - #{override.ssh.username} -c \"sh <(wget -qO- javier.io/s)\""
provider.access_key_id = "ACCESS_KEY_SECRET"
provider.secret_access_key = "ACCESS_KEY_SECRET"
provider.ami = "ami-a73264ce"
provider.instance_type = "t1.micro"
provider.keypair_name = "id_rsa"
end
end
# vi:ft=ruby:
</pre>
<p>And used with vagrant to launch empty remote boxes:</p>
<pre class="sh_sh">
$ vagrant up --provider=digital_ocean && vagrant ssh
$ vagrant up --provider=aws && vagrant ssh
</pre>
<p>Don’t forget to destroy the instances to avoid extra charges.</p>
<pre class="sh_sh">
$ vagrant destroy
</pre>
<p>That’s it, how do you launch remote environments?</p>
report bugs to debian from ubuntu2013-09-28T00:00:00+00:00http://javier.io/blog/en/2013/09/28/report-debian-bugs-within-ubuntu<h2 id="report-bugs-to-debian-from-ubuntu">report bugs to debian from ubuntu</h2>
<h6 id="28-sep-2013">28 Sep 2013</h6>
<p>I’m a normal computer user; I don’t run local mail servers and I don’t intend to. I have my email account with gmail and I check it using a web browser, or <a href="http://www.mutt.org/">mutt</a> when I’m on my own computer. This can be troublesome if you intend to report bugs to <a href="https://www.debian.org/">debian</a> using its <a href="https://www.debian.org/Bugs/">bug tracker system</a>, an antique system based on email.</p>
<p>To report a bug from within Ubuntu, people are supposed to type:</p>
<pre>
$ reportbug -B debian package
</pre>
<p>However it won’t work, because it won’t find a local mail server (and it will let you know only after you’ve spent 15-20 min on the report, smart programming 😒). After spending more precious time on the Internet, you’ll find out it can actually be configured to use an external smtp server (I’ve read somewhere that Debian devs are interested in upgrading the BTS, and they will probably re-invent the wheel in the process, because of course there are not enough bug trackers online)…</p>
<pre>
$ reportbug --configure
</pre>
<p>And you’ll need to input the following data:</p>
<ul>
<li>smtp.gmail.com:587</li>
<li>user@gmail.com</li>
<li>check tls</li>
</ul>
<p>After doing so, a new <strong>$HOME/.reportbugrc</strong> file will be created and the original command will work.</p>
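<p>For reference, the relevant lines of the generated file look roughly like these (hypothetical account; adjust to your own):</p>

```
# ~/.reportbugrc (relevant lines)
smtphost "smtp.gmail.com:587"
smtpuser "user@gmail.com"
smtptls
```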
<pre>
$ reportbug -B debian package
</pre>
<h3 id="extra">extra</h3>
<p>If your report contains patches, then after configuring <strong>reportbug</strong> it may be a good idea to use <strong>submittodebian</strong> instead:</p>
<ul>
<li>It adds ubuntu specific <a href="https://wiki.ubuntu.com/Debian/Usertagging">tags</a></li>
<li>It allows editing the patch before sending the report (to remove Ubuntu-only changes)</li>
<li>It uses reportbug internally to send messages</li>
</ul>
<p><strong>submittodebian</strong> only works when you have <em>.orig, .diff, .changes</em> files; these files are generated by <a href="http://man.he.net/man1/debuild">debuild</a>.</p>
<h3 id="examples">examples</h3>
<h4 id="apt-get-source-traditional">apt-get source (traditional)</h4>
<pre>
$ apt-get source xicc
$ cd xicc-0.2/
$ sed -i 's/colour/color/g' debian/control
$ dch -i 'debian/control: replaced "colour" with "color".'
$ debuild -S
$ submittodebian
</pre>
<h4 id="bzr-modern">bzr (modern)</h4>
<pre>
$ bzr branch lp:ubuntu/xicc
$ cd xicc
$ sed -i 's/colour/color/g' debian/control
$ dch -i 'debian/control: replaced "colour" with "color".'
$ bzr commit -m 'replaced "colour" with "color".'
$ bzr bd -- -S
$ submittodebian
</pre>
<p>References</p>
<ul>
<li><a href="http://www.debian.org/Bugs/Reporting">http://www.debian.org/Bugs/Reporting</a></li>
<li><a href="https://wiki.ubuntu.com/Debian/Bugs">https://wiki.ubuntu.com/Debian/Bugs</a></li>
</ul>
pbuilder tips2013-09-27T00:00:00+00:00http://javier.io/blog/en/2013/09/27/pbuilder-tips<h2 id="pbuilder-tips">pbuilder tips</h2>
<h6 id="27-sep-2013">27 Sep 2013</h6>
<p>I’ll write down some tips useful when dealing with pbuilder in Ubuntu. Pbuilder is a tool for testing the build of .deb packages from .dsc sources; however, I often use it as a light replacement for full virtual machines.</p>
<h2 id="e-release-signed-by-unknown-key-key-id-8b48ad6246925553">E: Release signed by unknown key (key id 8B48AD6246925553)</h2>
<pre class="sh_sh">
I: Distribution is sid.
I: Building the build environment
I: running debootstrap
/usr/sbin/debootstrap
I: Retrieving Release
I: Retrieving Release.gpg
I: Checking Release signature
E: Release signed by unknown key (key id 8B48AD6246925553)
</pre>
<p>This message indicates debootstrap has not been able to verify that <strong>8B48AD6246925553</strong> is a valid key; by default, pbuilder in Ubuntu reads <strong>/usr/share/keyrings/ubuntu-archive-keyring.gpg</strong>. This keyring is defined at <strong>/usr/share/pbuilder/pbuilderrc</strong>. It makes sense that a Debian key is not valid in an Ubuntu setup; however, sometimes it’s useful to test a package against Debian without installing a full Debian environment.</p>
<p>This problem can be solved by adding the Debian key to the Ubuntu keyring:</p>
<pre class="sh_sh">
$ sudo gpg --no-default-keyring --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg --recv-keys 8B48AD6246925553
$ sudo DIST=sid ARCH=amd64 pbuilder create
</pre>
<p>Or adding it to another keyring and using it temporarily:</p>
<pre class="sh_sh">
$ gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --recv-keys 8B48AD6246925553
$ tail $HOME/.pbuilderrc
DEBOOTSTRAPOPTS=(
'--variant=buildd'
'--keyring' '/etc/apt/trusted.gpg'
)
$ sudo DIST=sid ARCH=amd64 pbuilder create
</pre>
<p>If you don’t want to mess with <strong>~/.pbuilderrc</strong>, the parameter can also be set from the command line:</p>
<pre class="sh_sh">
$ sudo DIST=sid ARCH=amd64 pbuilder create --debootstrapopts --keyring=/etc/apt/trusted.gpg
</pre>
<h2 id="run-x-apps">Run X apps</h2>
<p>Pbuilder is nothing but a chroot plus Debian enhancements; you can run virtually anything in it, from audio/video to cli/gui applications. Running an X app is a two step process:</p>
<pre class="sh_sh">
$ xhost + #in the host environment
</pre>
<pre class="sh_sh">
[chroot] $ export DISPLAY=:0.0
[chroot] $ app
</pre>
<h2 id="run-i18n-apps">Run i18n apps</h2>
<p>Running apps in other languages requires downloading extra language packages and setting the LC_ALL variable:</p>
<pre class="sh_sh">
[chroot] $ apt-get install language-pack-es #replace 'es' with your own two letter language code
[chroot] $ LC_ALL=es_ES.utf-8 app
</pre>
<h2 id="run-multimedia-apps">Run multimedia apps</h2>
<p>To run multimedia applications, besides enabling X you’ll need to bind mount <strong>/proc</strong> and <strong>/dev</strong>:</p>
<pre class="sh_sh">
$ printf "%s\\n" 'BINDMOUNTS="${BINDMOUNTS} /dev /proc"' >> ~/.pbuilderrc
$ pbuilder login
</pre>
openfiler and samba trash support2013-09-23T00:00:00+00:00http://javier.io/blog/en/2013/09/23/openfiler-samba-trash<h2 id="openfile-and-samba-trash-support">openfiler and samba trash support</h2>
<h6 id="23-sep-2013">23 Sep 2013</h6>
<p>Sometimes it can be useful to have trash support in samba/cifs. Sadly, it’s not straightforward to do in <a href="http://www.openfiler.com/">openfiler</a>.</p>
<p><strong>/opt/openfiler/var/www/includes/generate.inc</strong></p>
<p>In an average samba installation, preferences are saved in <strong>/etc/samba/smb.conf</strong>; in openfiler, however, this and many other files are constantly rebuilt, so changes won’t last if you apply them there. Modify <strong>/opt/openfiler/var/www/includes/generate.inc</strong> instead:</p>
<p>Around line 1588:</p>
<pre>
/* enable trash support */
$ac_smb_fp->AddLine( "\n");
$ac_smb_fp->AddLine( " ; enable trash support");
$ac_smb_fp->AddLine( " vfs objects = audit recycle" );
$ac_smb_fp->AddLine( " recycle: repository = /path/.trash" );
$ac_smb_fp->AddLine( " recycle: keeptree = Yes" );
$ac_smb_fp->AddLine( " recycle: exclude = *.tmp, *.temp, *.log, *.ldb" );
$ac_smb_fp->AddLine( " recycle: exclude_dir = tmp " );
$ac_smb_fp->AddLine( " recycle: versions = Yes " );
$ac_smb_fp->AddLine( " recycle: noversions = *.docx|*.doc|*.xls|*xlsx|*.ppt|*.odt" );
$ac_smb_fp->AddLine( "\n");
</pre>
<p>It may be a good idea to delete the oldest files every now and then:</p>
<pre>
0 6 * * * root find /path/.trash -type f -mtime +14 -delete > /dev/null
</pre>
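<p>The retention logic of that cron entry, wrapped as a function for clarity (a sketch; <code class="language-plaintext highlighter-rouge">purge_trash</code> is an illustrative name):</p>

```shell
# Delete regular files older than 14 days under the given trash directory
purge_trash() {
    trash_dir=$1
    find "$trash_dir" -type f -mtime +14 -delete
}
```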
<p>Happy trashing 😉</p>
activate extensions in firefox nightly2013-07-31T00:00:00+00:00http://javier.io/blog/en/2013/07/31/activate-extensions-in-firefox-nightly<h2 id="active-extensions-in-firefox-nightly">activate extensions in firefox nightly</h2>
<h6 id="31-jul-2013">31 Jul 2013</h6>
<p><strong><a href="/assets/img/78.jpg"><img src="/assets/img/78.jpg" alt="" /></a></strong></p>
<p>Firefox nightly is the firefox version that is compiled every night. By default it will avoid loading any extension; however, this behaviour can be overridden. To do so, add the variable <strong>extensions.checkCompatibility.nightly</strong> to its configuration:</p>
<ol>
<li>Open ‘about:config’</li>
<li>Write <strong>extensions.checkCompatibility.nightly</strong></li>
<li>Select ‘New -> Boolean’</li>
<li>Write <strong>extensions.checkCompatibility.nightly</strong> again in “New value”</li>
<li>Select ‘false’ as the predefined state</li>
<li>Restart firefox</li>
</ol>
<p>From now on <strong>‘about:plugins’</strong> will be available as usual. This procedure is also packaged as the <a href="https://addons.mozilla.org/en-US/firefox/addon/checkcompatibility/">Disable Compatibility Checks Add-on</a>, which works in all Firefox releases, nightly and stable alike.</p>
<p>Happy browsing 😋</p>
<ul>
<li><a href="http://nightly.mozilla.org/">Firefox nightly</a></li>
<li><a href="http://kb.mozillazine.org/Extensions.checkCompatibility">Extensions.checkCompatibility</a></li>
</ul>
logstash + redis + elasticsearch + kibana32013-07-23T00:00:00+00:00http://javier.io/blog/en/2013/07/23/logstash-redis-elasticsearch-kibana<h2 id="logstash--redis--elasticsearch--kibana3">logstash + redis + elasticsearch + kibana3</h2>
<h6 id="23-jul-2013">23 Jul 2013</h6>
<p><strong><a href="/assets/img/76.jpg"><img src="/assets/img/76.jpg" alt="" /></a></strong></p>
<ul>
<li><a href="http://logstash.net/">logstash</a></li>
<li><a href="http://redis.io/">redis</a></li>
<li><a href="http://elasticsearch.org/">elasticsearch</a></li>
<li><a href="http://three.kibana.org/">kibana3</a></li>
<li><a href="http://caspian.dotconf.net/menu/Software/SendEmail/">sendemail</a></li>
</ul>
<p><a href="http://en.wikipedia.org/wiki/Unix_philosophy">Composition</a> applied to logging has been a great success lately; this week I verified how easy it is to use logstash and friends with 48 servers distributed across two datacenters. I’ve created a script to deploy all the programs on a single node.</p>
<pre class="sh_sh">
$ bash <(wget -qO- https://raw.github.com/javier-lopez/learn/master/sh/is/log-stack)
</pre>
<p><strong><a href="/assets/img/77.jpg"><img src="/assets/img/77.jpg" alt="" /></a></strong></p>
<p>If you prefer using a node per service you’ll need to go your own way, it shouldn’t be too difficult.</p>
<h2 id="extra-patterns">Extra, patterns</h2>
<p>To send emails when a pattern is found, I used the grep and file logstash filters:</p>
<pre class="sh_sh">
$ sudo service logstash-shipper stop
$ sudo vi /home/logstash/shipper.conf
$ sudo service logstash-shipper start
</pre>
<p><strong>/home/logstash/shipper.conf</strong></p>
<pre>
filter {
  grep {
    type    => "syslog"
    match   => ["@message","pattern"]
    add_tag => "Alert_flood"
    drop    => false
  }
}
output {
  file {
    type           => "syslog"
    tags           => [ "Alert_flood" ]
    message_format => "%{@message}"
    path           => "/tmp/logstash_alert"
  }
}
</pre>
<p><strong>WARNING:</strong> shipper.conf doesn’t look exactly like this; these snippets must be integrated into your own files, so blind copy and paste won’t work. If you’re not sure about the syntax, take a look at the logstash <a href="http://logstash.net/docs/1.1.13/">documentation</a>.</p>
<p>So, after restarting the service, logstash will add an “Alert_flood” tag to all messages where the pattern is found, and will copy these messages (besides sending them to redis) to <strong>/tmp/logstash_alert</strong>.</p>
<p>Finally I wrote a <a href="https://gist.github.com/javier-lopez/6066888">script</a> to send warning messages by email to the admins:</p>
<pre class="sh_sh">
$ sudo crontab -l
*/1 * * * * /usr/local/bin/check_alerts_logstash.sh
</pre>
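<p>The cron’ed script itself boils down to something like this sketch (the addresses and sendemail flags are illustrative; the real script is in the gist linked above):</p>

```shell
#!/bin/sh
# check_alerts_logstash.sh (sketch): mail accumulated alerts, then reset the file
ALERT_FILE="${ALERT_FILE:-/tmp/logstash_alert}"
MAIL_CMD="${MAIL_CMD:-sendemail -f logstash@example.com -t admins@example.com -u logstash-alert}"

check_alerts() {
    [ -s "$ALERT_FILE" ] || return 0        # no new alerts, nothing to do
    $MAIL_CMD -m "$(cat "$ALERT_FILE")" &&  # deliver the accumulated batch
        : > "$ALERT_FILE"                   # truncate only after a successful send
}
check_alerts
```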
<ul>
<li><a href="http://cleversoft.wordpress.com/2013/04/05/887/">http://cleversoft.wordpress.com/2013/04/05/887/</a></li>
</ul>
lib bash2013-06-18T00:00:00+00:00http://javier.io/blog/en/2013/06/18/bash-lib<h2 id="lib-bash">lib bash</h2>
<h6 id="18-jun-2013">18 Jun 2013</h6>
<p>I don’t consider myself a programmer but a sort of power user; I’m in love with the linux cli and every time I can, I automate repetitive tasks. That’s how I ended up writing <a href="https://github.com/javier-lopez/learn/tree/master/sh">~60 scripts</a>. After a while I noticed a pattern: I used to copy and paste parts of other scripts to finish faster, so I started to write functions and put them in a lib file. It has kept growing since, and I thought it would be a good idea to share it with others.</p>
<ul>
<li><a href="https://github.com/javier-lopez/learn/blob/master/sh/lib">sh/lib</a></li>
</ul>
<p>If you can improve the current functions or add new ones, you are welcome (just branch and push back); be aware that the current code may hurt your eyes, you’ve been warned.</p>
<iframe class="showterm" src="http://showterm.io/43162198175c203d5a8f6" width="640" height="300"> </iframe>
<p>Have fun 😉</p>
remote environments normalization2013-05-28T00:00:00+00:00http://javier.io/blog/en/2013/05/28/remote-environments-normalization<h2 id="remote-environments-normalization">remote environments normalization</h2>
<h6 id="28-may-2013">28 May 2013</h6>
<p>I access a fair amount of remote environments through ssh; most of the time I end up copying little bits of configuration files to make them easier to use. I do it so often that I created a script to do it for me.</p>
<pre class="sh_sh">
$ sh <(wget -qO- javier.io/s)
</pre>
<iframe class="showterm" src="http://showterm.io/3bfc94afe0f51e8d6411f" width="640" height="350"> </iframe>
<p>Some of my favorite changes are:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[+] Installation of: byobu, vim-nox, curl, command-not-found, libpam-captcha, shundle and htop
[+] Removal of services: sendemail, apache, bind, etc
[+] Vim configuration
[+] Wcd as a replacement to cd
[+] +60 scripts:
[+] pastebin, $ cat file | pastebin
[+] extract, $ extract file.suffix
[+] fu-search, $ fu-search grep
[+] rm_, $ rm .bashrc && rm -u .bashrc
[+] uimg, $ uimg image.png #img pastebin
[+] ...
</code></pre></div></div>
<p>By default the script will back up (.old) any file before overriding it. Now all my new pristine environments are equal 😊</p>
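<p>The backup step is equivalent to this small sketch (<code class="language-plaintext highlighter-rouge">install_file</code> is an illustrative name, not the script’s actual function):</p>

```shell
# Copy src over dst, keeping a .old backup of any pre-existing dst
install_file() {
    src=$1
    dst=$2
    if [ -e "$dst" ]; then
        cp -p "$dst" "$dst.old"   # preserve the original with its attributes
    fi
    cp "$src" "$dst"
}
```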
<ul>
<li><a href="https://github.com/javier-lopez/dotfiles/">dotfiles</a></li>
<li><a href="https://github.com/javier-lopez/shundle">shundle</a></li>
<li><a href="https://github.com/javier-lopez/learn/">utils</a></li>
</ul>
access localhost with pagekite2013-04-06T00:00:00+00:00http://javier.io/blog/en/2013/04/06/access-localhost-with-pagekite<h2 id="access-localhost-with-pagekite">access localhost with pagekite</h2>
<h6 id="06-apr-2013">06 Apr 2013</h6>
<p>I ♡ pagekite. It allows me to connect to my laptop from anywhere, literally; it doesn’t matter if my computer is behind a nat router or if my inbound traffic is blocked, as long as my computer can start connections to the Internet I’m covered. And I don’t need to reconfigure anything; it will work even if I’m constantly changing networks.</p>
<p><strong><a href="/assets/img/68.jpg"><img src="/assets/img/68.jpg" alt="" /></a></strong></p>
<p>Why do I need to access my computer under every possible scenario? Personal portability; I prefer to travel lightly, just give me a book and some headphones and I’m ready to go to the end of the world. And even though I have most of my stuff on servers, some things somehow end up on my personal laptop.</p>
<p>So, right now I connect to home by typing:</p>
<pre class="sh_sh">
$ ssh home.javier.io
</pre>
<p>It doesn’t matter where I am, nor where my laptop is, it will just work 😂</p>
<p>If you’re interested in setting something similar up, go and create an account at <a href="http://pagekite.net">http://pagekite.net</a>, the personal startup of <a href="http://bre.klaki.net/">Bjarni Einarsson</a>, an Icelandic hacker.</p>
<p>Once done, you’ll be able to run:</p>
<pre class="sh_sh">
$ curl -s https://pagekite.net/pk/ |sudo bash #for installing pagekite in 1 line
$ pagekite.py 80 yourname.pagekite.me
</pre>
<p>And now you’ll be running your own web server, available to the world; isn’t it great? Now let’s talk about more interesting stuff, such as using your own domain and connecting through ssh or any other protocol.</p>
<p>This is what happens when someone uses pagekite:</p>
<pre class="sh_sh">
$ ssh home.javier.io
</pre>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> 192.168.1.x home.javier.io home.javier.pagekite.me 192.168.1.x
::::::::::: ::::::::::: :::::::::::: :::::::::::::::
| client | => | dns | => | pagekite | => | laptop |
::::::::::: ::::::::::: :::::::::::: :::::::::::::::
</code></pre></div></div>
<p>The computer where the client is launched will query a dns server, which will point it to a pagekite subdomain, and from there to a reverse connection between pagekite and your server. This allows the client and the server (the laptop behind nat) to exchange data through pagekite’s servers.</p>
<h3 id="dns">Dns</h3>
<ul>
<li>Cname</li>
</ul>
<p>For this to work, home.javier.io must point to pagekite. This can be done with a CNAME entry, and the procedure depends on your dns provider. In my case it looks like this (<a href="http://iwantmyname.com">http://iwantmyname.com</a>):</p>
<p><strong><a href="/assets/img/69.png"><img src="/assets/img/69.png" alt="" /></a></strong></p>
<ul>
<li><a href="http://pagekite.net">pagekite.me</a></li>
</ul>
<p>Upon registration, pagekite will give you a <strong>nick.pagekite.me</strong> subdomain for free where you can add <a href="https://pagekite.net/signup/?more=free">other subdomains</a> to get <strong>subdomain.nick.pagekite.me</strong></p>
<p><strong><a href="/assets/img/70.png"><img src="/assets/img/70.png" alt="" /></a></strong></p>
<ul>
<li>home.javier.io kite</li>
</ul>
<p>You need to <a href="https://pagekite.net/signup/?more=cname#cnameForm">register</a> the CNAME entry in pagekite as well:</p>
<p><strong><a href="/assets/img/71.png"><img src="/assets/img/71.png" alt="" /></a></strong></p>
<p>Now your <a href="https://pagekite.net/home/">home page</a> should look like this:</p>
<p><strong><a href="/assets/img/72.png"><img src="/assets/img/72.png" alt="" /></a></strong></p>
<h3 id="server">Server</h3>
<p>Now it’s time to configure pagekite in the target machine:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>###[ Current settings for pagekite.py v0.5.6a. ]#########
# ~/.pagekite.rc
## NOTE: This file may be rewritten/reordered by pagekite.py.
#
##[ Default kite and account details ]##
kitename = home.javier.io
kitesecret = KITESECRET_KEY
##[ Front-end settings: use pagekite.net defaults ]##
defaults
##[ Back-ends and local services ]##
service_on = http:@kitename : localhost:80 : @kitesecret
service_on = raw-22:@kitename : localhost:22 : @kitesecret
##[ Miscellaneous settings ]##
savefile = ~/.pagekite.rc
###[ End of pagekite.py configuration ]#########
</code></pre></div></div>
<p>And launch the service:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./pagekite.py
>>> Hello! This is pagekite v0.5.6a. [CTRL+C = Stop]
Connecting to front-end 69.164.211.158:443 ...
- Protocols: http http2 http3 https websocket irc finger httpfinger raw
- Protocols: minecraft
- Ports: 79 80 443 843 2222 3000 4545 5222 5223 5269 5670 6667 8000 8080
- Ports: 8081 9292 25565
- Raw ports: 22 virtual
Quota: You have 2559.74 MB, 29 days and 4 connections left.
Connecting to front-end 173.230.155.164:443 ...
~<> Flying localhost:22 as ssh://home.javier.io:22/ (HTTP proxied)
~<> Flying localhost:80 as https://home.javier.io/
<< pagekite.py [flying] Kites are flying and all is well.
</code></pre></div></div>
<h3 id="client">Client</h3>
<p>To connect to your laptop you can use any web browser or complete an extra step to be able to use ssh. The <strong>$HOME/.ssh/config</strong> file should be edited as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host home.javier.io
CheckHostIP no
ProxyCommand /bin/nc -X connect -x %h:443 %h %p
</code></pre></div></div>
<p><strong>WARNING:</strong> the nc command must be the openbsd version; in Ubuntu the package is called <strong>netcat-openbsd</strong></p>
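<p>A quick heuristic to verify which nc flavor is installed (a sketch; it only greps the help output for the <strong>-X</strong> proxy flag):</p>

```shell
# Print which netcat flavor seems to be available
check_nc() {
    if nc -h 2>&1 | grep -q -- '-X'; then
        echo "openbsd-like nc (supports -X)"
    else
        echo "install netcat-openbsd"
    fi
}
check_nc
```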
<p>If everything is correct, you should now be able to login:</p>
<pre class="sh_sh">
$ ssh home.javier.io
admin@home.javier.io's password:
</pre>
<h3 id="own-server">Own Server</h3>
<p>Pagekite is free software, both the backend (client) and the frontend (server); the same <code class="language-plaintext highlighter-rouge">pagekite.py</code> can play both roles. This is a quick summary in case you prefer going your own way.</p>
<pre class="sh_sh">
(vps) $ pagekite --clean --isfrontend --ports=8080 --domain=*:h.javier.io:passw0rd
>>> Hello! This is pagekite v0.5.6a. [CTRL+C = Stop]
This is a PageKite front-end server.
- Listening on *:8080
</pre>
<pre class="sh_sh">
(target) $ pagekite --clean --frontend=h.javier.io:8080 --service_on=http/8080:h.javier.io:localhost:8080:passw0rd
Connecting to front-end 107.161.164.253:8080 ...
- Protocols: http http2 http3 https websocket irc finger httpfinger raw
- Protocols: minecraft
- Ports: 8080
~<> Flying localhost:8080 as http://h.javier.io:8080/
</pre>
<p>And with that, your local machine will be available on Internet through your own public machine.</p>
<p>That’s it, happy flying 😋</p>
<ul>
<li><a href="https://github.com/pagekite/PyPagekite">https://github.com/pagekite/PyPagekite</a></li>
</ul>
improve boot performance in Ubuntu Precise and above2013-03-18T00:00:00+00:00http://javier.io/blog/en/2013/03/18/improve-boot-performance-in-ubuntu-1204<h2 id="improve-boot-performance-in-ubuntu-precise-and-above">improve boot performance in Ubuntu Precise and above</h2>
<h6 id="18-mar-2013">18 Mar 2013</h6>
<p>The <a href="http://e4rat.sourceforge.net/">e4rat</a> project develops tools that improve the boot process on Linux; to do so, it takes advantage of file reallocation in <a href="http://es.wikipedia.org/wiki/Ext4">ext4</a>, so if you’re not using ext4 it won’t work. Nor will it work if you are using <a href="http://en.wikipedia.org/wiki/Solid-state_drive">solid state drives</a>; for those disks, <a href="https://launchpad.net/ureadahead">ureadahead</a> (installed by default) already does a great job.</p>
<h3 id="introduction">Introduction</h3>
<p>A lot of the time spent in the boot process is wasted waiting for hard drives to spin up and initialize (this doesn’t happen with SSDs); you can see it for yourself with <a href="http://www.bootchart.org/">bootchart</a>.</p>
<p><strong><a href="/assets/img/66.png"><img src="/assets/img/66.png" alt="" /></a></strong></p>
<p>The red graph represents the time spent waiting for the hard drive and the blue one the time the CPU is being used.</p>
<p><strong>e4rat</strong>’s technique moves files critical to the boot process next to each other so they can be read from the hard drive with minimal seeking. After loading them into RAM, the time required to read them drops significantly.</p>
<p><strong><a href="/assets/img/67.png"><img src="/assets/img/67.png" alt="" /></a></strong></p>
<p>This is a graph of the same system after using <em>e4rat</em>.</p>
<p>The process should be repeated every time the kernel is upgraded or after major updates have been applied.</p>
<h3 id="installation">Installation</h3>
<p><strong>e4rat</strong> requires at least a 2.6.31 Linux kernel; Ubuntu has shipped such kernels since 11.04. Fortunately the project provides .deb packages, so the installation process is quite simple; grab the appropriate version for your CPU architecture from:</p>
<ul>
<li><a href="http://sourceforge.net/projects/e4rat/files">http://sourceforge.net/projects/e4rat/files</a></li>
</ul>
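<p>Before grabbing a package, it’s worth confirming the running kernel meets the 2.6.31 requirement. A minimal sketch (the version strings are illustrative) using <strong>sort -V</strong> to compare versions:</p>

```sh
# compare the running kernel against e4rat's minimum supported version
required="2.6.31"
current="$(uname -r | cut -d- -f1)"   # e.g. 3.8.2-ck1 -> 3.8.2
# sort -V sorts version strings; if the smallest is $required, we're fine
oldest="$(printf '%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
    echo "kernel $current is recent enough for e4rat"
else
    echo "kernel $current is too old for e4rat"
fi
```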
<p>Before installing <strong>e4rat</strong> you will need to ensure <strong>ureadahead</strong> has been completely removed; to do so in Debian/Ubuntu run:</p>
<pre class="sh_sh">
$ sudo apt-get purge ureadahead
</pre>
<p>The system will ask to uninstall <strong>ubuntu-minimal</strong> too. Let it continue; ubuntu-minimal is a meta-package that doesn’t contain anything by itself, it’s useful however during OS installation to pull in a base set of packages.</p>
<p>After completely removing ureadahead, <strong>e4rat</strong> can be installed with dpkg:</p>
<pre class="sh_sh">
$ sudo dpkg -i e4rat_0.2.3_amd64.deb
</pre>
<h3 id="configuration">Configuration</h3>
<p>For <strong>e4rat</strong> to work it needs to recognize which files are being used during the boot process; to do so, add the <strong>init=/sbin/e4rat-collect</strong> string to the <strong>kernel</strong> line in <strong>/boot/grub/menu.lst</strong> (or the equivalent file for grub2, etc.):</p>
<pre class="config">
title Ubuntu 12.04.2 LTS, kernel 3.8.2-ck1
uuid 793e9a6d-d545-46f0-ac9c-49071c450b62
kernel ... ro init=/sbin/e4rat-collect
initrd /boot/initrd.img-3.8.2-ck1
quiet
</pre>
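<p>On systems that use GRUB2 (where no <strong>/boot/grub/menu.lst</strong> exists), the common equivalent is to append the parameter to the kernel command line in <strong>/etc/default/grub</strong> and regenerate the configuration. A sketch, your existing options may differ:</p>
<pre class="config">
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/sbin/e4rat-collect"
</pre>
<pre class="sh_sh">
$ sudo update-grub
</pre>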
<blockquote>
<p>Upon rebooting, launch your most common applications (web/file browsers?, terminal emulator?, etc.) as soon as possible; e4rat will add to its index the files loaded into memory during the first 2 minutes after booting.</p>
</blockquote>
<blockquote>
<p>Review <strong>/var/lib/e4rat/startup.log</strong> to confirm it recorded that information.</p>
</blockquote>
<pre class="sh_sh">
$ file /var/lib/e4rat/startup.log
/var/lib/e4rat/startup.log: ASCII text
</pre>
<h3 id="file-reallocation">File reallocation</h3>
<p>At this point <strong>e4rat</strong> already knows which files should be loaded at boot time; to relocate them, reboot the system in recovery (or safe) mode.</p>
<blockquote>
<p>In my system the grub entry looks like this:</p>
</blockquote>
<pre class="config">
Ubuntu 12.04.2 LTS, kernel 3.8.2-ck1 (recovery mode)
</pre>
<blockquote>
<p>Once loaded, execute <strong>e4rat-realloc</strong> several times until the software indicates there are no more improvements possible:</p>
</blockquote>
<pre class="sh_sh">
# e4rat-realloc /var/lib/e4rat/startup.log
...
...
No further improvements...
</pre>
<blockquote>
<p>Replace <strong>init=/sbin/e4rat-collect</strong> with <strong>init=/sbin/e4rat-preload</strong>:</p>
</blockquote>
<pre class="config">
title Ubuntu 12.04.2 LTS, kernel 3.8.2-ck1
uuid 793e9a6d-d545-46f0-ac9c-49071c450b62
kernel ... ro plymouth:force-splash init=/sbin/e4rat-preload
initrd /boot/initrd.img-3.8.2-ck1
quiet
</pre>
<blockquote>
<p>Reboot</p>
</blockquote>
<p>Done, now the boot process should be faster and smoother 😎</p>
<h3 id="uninstallation">Uninstallation</h3>
<p>If you find <strong>e4rat</strong> too difficult to use or buggy, you can uninstall it with the following steps:</p>
<pre class="sh_sh">
$ sudo apt-get purge e4rat
$ sudo apt-get install ubuntu-minimal ureadahead
$ sudo vim /boot/grub/menu.lst #and remove init=/sbin/e4rat-preload
</pre>
<ul>
<li><a href="http://rafalcieslak.wordpress.com/2013/03/17/e4rat-decreasing-bootup-time-on-hdd-drives">http://rafalcieslak.wordpress.com/2013/03/17/e4rat-decreasing-bootup-time-on-hdd-drives</a></li>
</ul>
dmenu with xft support2012-12-26T00:00:00+00:00http://javier.io/blog/en/2012/12/26/dmenu-xft<h2 id="dmenu-with-xft-support">dmenu with xft support</h2>
<h6 id="26-dec-2012">26 Dec 2012</h6>
<p>I love minimalist programs, not ugly ones. dmenu is one of my favorite apps and I’ve just discovered a patch that makes it use pretty xft fonts. So I recompiled a personal version and put it somewhere.</p>
<p><strong><a href="/assets/img/65.jpg"><img src="/assets/img/65.jpg" alt="" /></a></strong></p>
<p>If you’re interested in using this version, feel free to grab a copy (only Ubuntu LTS versions are supported):</p>
<pre class="sh_sh">
$ sudo apt-add-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install dmenu
</pre>
<p>Upon installation, dmenu can be used with any xft font:</p>
<pre class="sh_sh">
$ dmenu_run -fn "Liberation Mono-8"
</pre>
<ul>
<li><a href="https://bugs.launchpad.net/ubuntu/+source/suckless-tools/+bug/1093745">LP #1093745</a></li>
</ul>
git clone only the last snapshot of a project2012-11-22T00:00:00+00:00http://javier.io/blog/en/2012/11/22/git-clone-last<h2 id="git-clone-only-the-last-snapshot-of-a-project">git clone only the last snapshot of a project</h2>
<h6 id="22-nov-2012">22 Nov 2012</h6>
<p>By default git clone downloads all the data attached to a repository; sometimes, however, I’m only interested in getting the latest snapshot. This can be done with the <strong>--depth=1</strong> option:</p>
<pre class="sh_sh">
$ git clone --depth=1 git://github.com/javier-lopez/dotfiles.git
</pre>
<p>This is called a shallow clone.</p>
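<p>The behavior is easy to verify without touching the network, using a throwaway local repository; note the <strong>file://</strong> form is required, since a plain local path ignores <strong>--depth</strong>:</p>

```sh
# build a tiny repository with two commits, then shallow-clone it
tmp="$(mktemp -d)"
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"
cd "$tmp"
git clone -q --depth=1 "file://$tmp/repo" shallow
git -C shallow rev-list --count HEAD   # prints 1: only the latest snapshot came over
```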
<ul>
<li><a href="http://stackoverflow.com/questions/1209999/using-git-to-get-just-the-latest-revision">http://stackoverflow.com/questions/1209999/using-git-to-get-just-the-latest-revision</a></li>
</ul>
dmenu for everything2012-11-18T00:00:00+00:00http://javier.io/blog/en/2012/11/18/dmenu-for-everything<h2 id="dmenu-for-everything">dmenu for everything</h2>
<h6 id="18-nov-2012">18 Nov 2012</h6>
<p>I love minimalist systems and programs that focus on doing a single task very well; <a href="http://tools.suckless.org/dmenu/">dmenu</a> is one of them: it reads input from the user, matches patterns and returns results, simple! With this functionality it can be (ab)used to create launchers for almost anything, let’s review how to create a virtualbox launcher…</p>
<p>The first step (and the hardest) is to figure out how to create the option list to present on screen, in this example vbox machines:</p>
<pre class="sh_sh">
$ vboxmanage list vms | cut -d\" -f2
</pre>
<p>Once that is defined, it’s easy to come up with the missing parts:</p>
<pre class="sh_sh">
DMENU='dmenu -p > -i -nb #000000 -nf #ffffff -sb #000000 -sf #3B5998'
vboxmachine="$(vboxmanage list vms | cut -d\" -f2 | $DMENU)"
[ -z "${vboxmachine}" ] && exit 0 || vboxmanage -q startvm "$vboxmachine" --type gui
</pre>
<p>Everything in 3 LOC! This script can now be saved in <strong>/usr/local/bin/</strong> and bound to a shortcut; in my use case, I added it to <strong>~/.i3/config</strong>:</p>
<pre>
# vbox:
bindsym $Altgr+v exec dmenu_vbox
</pre>
<p>So now I can launch vbox machines by pressing <strong>Altgr + v</strong> and selecting the appropriate machine. If you like dmenu as much as I do, I’ve made a handful of scripts to control music, user sessions, apps, etc. Feel free to grab them at:</p>
<ul>
<li><a href="https://github.com/javier-lopez/learn/tree/master/sh/tools">https://github.com/javier-lopez/learn/tree/master/sh/tools</a></li>
</ul>
<p><strong><a href="/assets/img/61.png"><img src="/assets/img/61.png" alt="" /></a></strong>
<strong><a href="/assets/img/62.png"><img src="/assets/img/62.png" alt="" /></a></strong>
<strong><a href="/assets/img/63.png"><img src="/assets/img/63.png" alt="" /></a></strong>
<strong><a href="/assets/img/64.png"><img src="/assets/img/64.png" alt="" /></a></strong></p>
um lugar pertinho do céu2012-11-17T00:00:00+00:00http://javier.io/blog/pt/2012/11/17/um-lugar-pertinho-do-ceu<h2 id="um-lugar-pertinho-do-céu">um lugar pertinho do céu</h2>
<h6 id="17-nov-2012">17 Nov 2012</h6>
<p>The golden age of Mexican cinema consecrated one of Mexico’s most important figures, the immortal Pedro Infante.</p>
<p>Pedro Infante was a Mexican actor and singer; he was born in Sinaloa, today one of the most dangerous states in the country, and died in a plane crash at 39 years of age.</p>
<p>In “um lugar pertinho do céu” (a place very close to heaven), Pedro Infante plays the role of Pedro Gonzáles (a curious thing about films from those days is that the fictional names looked a lot like the real ones), an immigrant who arrives in the city full of hope of finding a good job, but ends up trapped in the claws of the modern system.</p>
<p>An excellent film, as long as you are not too sensitive.</p>
<div id="youtube">
<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/LtpCUf_Spu0?version=3&hl=en_US" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/LtpCUf_Spu0?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
dez coisas do que eu gosto2012-10-18T00:00:00+00:00http://javier.io/blog/pt/2012/10/18/dez-coisas-do-que-eu-gosto<h2 id="dez-coisas-do-que-eu-gosto">dez coisas do que eu gosto</h2>
<h6 id="18-oct-2012">18 Oct 2012</h6>
<p>A non-exhaustive list:</p>
<ul>
<li>Sleeping all day</li>
<li>Eating cucumbers</li>
<li>Playing with my ears</li>
<li>Taking baths until my fingers get wrinkled</li>
<li>Reading a book of short stories</li>
<li>Playing chess or football</li>
<li>Receiving gifts</li>
<li>Being alone in a place I don’t know</li>
<li>Surfing the Internet</li>
<li>Kissing girls</li>
</ul>
tropa de elite2012-08-19T00:00:00+00:00http://javier.io/blog/pt/2012/08/19/tropa-de-elite<h2 id="tropa-de-elite">tropa de elite</h2>
<h6 id="19-aug-2012">19 Aug 2012</h6>
<p>A few months ago I watched a film called tropa de elite (Elite Squad). It’s the story of a man who leads a heavily armed group that goes up the hillside favelas to kill drug traffickers. I liked the film because I believe it lets people see the other side of the police: honest men with the will to make things better.</p>
<p>The film also paints an interesting portrait of society in the state of Rio de Janeiro; I believe that, as in Mexico, the profession needs to be more valued and police officers better trained. I didn’t like the second part much: it has political overtones and captain Nascimento is no longer a normal man, but more like a superman. What are your favorite films?</p>
<h3 id="tropa-de-elite-1">Tropa de elite 1</h3>
<div id="youtube">
<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/Jz2DadDoRjg?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/Jz2DadDoRjg?hl=en_US&version=3" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<h3 id="tropa-de-elite-2">Tropa de elite 2</h3>
<div id="youtube">
<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/XL3ybRR1oE0?version=3&hl=en_US" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/XL3ybRR1oE0?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
kernel -ck for ubuntu precise2012-07-03T00:00:00+00:00http://javier.io/blog/en/2012/07/03/kernel-ck-for-ubuntu-1204<h2 id="kernel--ck-for-ubuntu-precise">kernel -ck for ubuntu precise</h2>
<h6 id="03-jul-2012">03 Jul 2012</h6>
<p><strong>UPDATE: 16/Jul/2014, the script was updated to compile the 3.15.5 kernel version</strong></p>
<p><strong><a href="http://ck-hack.blogspot.mx/">ck</a></strong> is the name of the Con Kolivas patchset, whose main purpose is to improve Linux performance on PCs and laptops. Traditionally the kernel ships with a lot of things for enterprise environments; that’s why this patchset has some popularity with people who want to tune their machine for games, multimedia and traditional work (browsing the web, editing text, IM, etc.).</p>
<p>The steps to compile a kernel with these modifications are:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Download the vanilla kernel
Download and apply the -bfq and -ck patchsets
Configure the kernel
Compile
Install
</code></pre></div></div>
<p>Fortunately some users at ubuntu-br.org have been following the -ck branch closely enough to create a script that automates the process:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Kernel Omnislash (Unofficial) – Aprendendo a voar sem segredos!!! (learning to fly without secrets)
http://sourceforge.net/projects/scriptkernel/files/
</code></pre></div></div>
<p>After checking it out, I edited it (to fix some errors and to add some bells and whistles) and put the result in: https://github.com/javier-lopez/learn/blob/master/sh/is/kernel-ck-ubuntu
The idea is that from time to time I check the script to make sure it compiles the latest -ck patchset for the latest Ubuntu LTS version. If you want to try it, run the following commands:</p>
<pre class="sh_sh">
$ wget https://raw.github.com/javier-lopez/learn/master/sh/is/kernel-ck-ubuntu
$ time sh kernel-ck-ubuntu
$ sudo dpkg -i ./linux-*.deb
</pre>
<p><strong><a href="/assets/img/59.png"><img src="/assets/img/59.png" alt="" /></a></strong></p>
<p>And reboot your system. If you don’t want to compile it yourself, I’ve built some .deb packages for amd64 and x86 😇</p>
<p><em>3.4.5</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.4.5-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.4.5-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.7.1</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.7.1-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.7.1-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.8.2</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.8.2-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.8.2-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.9.2</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.9.2-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.9.2-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.11.7</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.11.7-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.11.7-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.12.1</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.12.1-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.12.1-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.13.7</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.13.7-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.13.7-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.14.4</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.14.4-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.14.4-ck-i386.tar.bz2">x86</a></li>
</ul>
<p><em>3.15.5</em></p>
<ul>
<li><a href="http://f.javier.io/rep/deb/3.15.5-ck-amd64.tar.bz2">amd64</a></li>
<li><a href="http://f.javier.io/rep/deb/3.15.5-ck-i386.tar.bz2">x86</a></li>
</ul>
latex and me2012-05-15T00:00:00+00:00http://javier.io/blog/en/2012/05/15/latex-and-me<h2 id="latex-and-me">latex and me</h2>
<h6 id="15-may-2012">15 May 2012</h6>
<p>I maintain my CV in latex because it can easily generate different outputs, it’s easy to modify, and I think it gives extra geeky points. On the other hand it’s not always easy to compile.., so I’ll write down the process to not forget it and to do it faster next time.</p>
<p><a href="http://www.sharepdfbooks.com/ZZKLWWMPNYPU/template_banking_black.pdf.html"><img src="/assets/img/47.png" alt="" /></a></p>
<pre class="sh_sh">
$ apt-get install texlive-latex-base texlive-latex-extra latex-xcolor texlive-fonts-recommended
$ wget http://mirror.ctan.org/macros/latex/contrib/moderncv.zip
$ unzip moderncv.zip
$ sudo mv moderncv /usr/share/texmf-texlive/tex/latex/
$ sudo mktexlsr
</pre>
<p>After installing the dependencies and latex itself, the compilation process can be triggered as follows:</p>
<pre class="sh_sh">
$ latex cv.tex
</pre>
<p>The resulting .dvi file can then be converted to a pdf file:</p>
<pre class="sh_sh">
$ dvipdfm cv.dvi
</pre>
<p><a href="https://gist.github.com/2704079"><img src="/assets/img/48.png" alt="" /></a></p>
<p>If it seems like too much work, it can also be compiled <a href="https://www.sharelatex.com">online</a> 😁</p>
rm wrapper2012-03-19T00:00:00+00:00http://javier.io/blog/en/2012/03/19/rm-wrapper<h2 id="rm-wrapper">rm wrapper</h2>
<h6 id="19-mar-2012">19 Mar 2012</h6>
<p>Sometimes when I run:</p>
<pre class="sh_sh">
$ rm foo
</pre>
<p>I realize I didn’t mean it, so with this in mind I made a little <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/trash">wrapper</a> around rm; now, when I remove files, they’re sent to the trash bin, and it’s compatible with nautilus/pcmanfm.</p>
<p>Example: If I run from a terminal <strong>$ rm img.png</strong> I can then go to the Trash folder in Nautilus and restore it. If I delete an item with Nautilus (by pressing the <strong>Supr</strong> button) I can open a terminal and type <strong>$ rm -u img.png</strong> and get back my stuff.</p>
<p><strong><a href="/assets/img/53.png"><img src="/assets/img/53.png" alt="" /></a></strong></p>
<p>If you want to use the script, download it and move it to <strong>/usr/local/bin</strong>; then you can use it directly or even override rm through an alias defined in the <strong>~/.bashrc</strong> file:</p>
<pre class="sh_sh">
$ alias rm='trash'
</pre>
<!--<iframe class="showterm" src="http://showterm.io/0a5b334fd24f82bd5ede1" width="640" height="350"> </iframe> -->
<p>😈</p>
online partners2012-03-11T00:00:00+00:00http://javier.io/blog/en/2012/03/11/online-partners<h2 id="online-partners">online partners</h2>
<h6 id="11-mar-2012">11 Mar 2012</h6>
<p>For one reason or another I find remote people more reliable; I mean, whenever I make a new friendship, if I made it online it almost always lasts longer and is often of better quality. I suppose that’s because when you’re behind a screen your acts are premeditated: you think more carefully about what you say and how to react to what people say to you. There is no “I’m moving somewhere else”, because there is no place without Internet! And depending on your Internet addiction you can even end up talking more with online friends than with local people.</p>
<p>Nothing, I appreciate online partners 😌</p>
out of office2012-03-03T00:00:00+00:00http://javier.io/blog/en/2012/03/03/out-of-office<h2 id="out-of-office">out of office</h2>
<h6 id="03-mar-2012">03 Mar 2012</h6>
<p>I’ve just finished some <a href="http://www.youtube.com/user/ugjmexico">videos</a> for the UGJ. I spent almost all my day walking around Mexico City’s downtown looking at buildings and buying stuff, had a cup of coffee with <a href="http://xakemix.wordpress.com/">Akemi</a> and shaved my beard before going to bed 😢</p>
with good luck!2012-02-24T00:00:00+00:00http://javier.io/blog/en/2012/02/24/with-good-luck<h2 id="with-good-luck">with good luck!</h2>
<h6 id="24-feb-2012">24 Feb 2012</h6>
<p>I hate banks! Today I lost my morning trying to use paypal through my <a href="http://www.banorte.com/">banorte</a> account; at the end, however, I was able to buy my gift for <a href="http://mwkdoll.blogspot.com/">Mwkdoll</a> (a domain name) =)! I hope she likes it, she is one of the smartest girls I know… I’ve also found a little jewel, “<a href="http://www.estantevirtual.com.br/formaseletras/Hans-Staden-Meu-Cativeiro-Entre-os-Selvagens-do-Brasil-48243284">meu cativeiro entre os selvagens do brasil</a>” (my captivity among the savages of Brazil) by Hans Staden. I’ve looked in most of the biggest bookstores here and only found 2 Portuguese books, and this one was on the street for only 1 dollar!! yeeei.</p>
<p>Good luck!</p>
clipboard synchronization between X11 and gnome apps2012-02-20T00:00:00+00:00http://javier.io/blog/en/2012/02/20/clipboard-synchronization-x-gnome<h2 id="clipboard-synchronization-between-x11-and-gnome-apps">clipboard synchronization between X11 and gnome apps</h2>
<h6 id="20-feb-2012">20 Feb 2012</h6>
<p>By default X11 powered systems have at least <a href="http://en.wikipedia.org/wiki/X_Window_selection#Clipboard">two different clipboards</a> which may cause confusion sometimes 😖</p>
<p>There is no way to disable/delete them, so the next best solution is to synchronize them. <a href="http://www.nongnu.org/autocutsel/">Autocutsel</a> is a free CLI utility that can do this. It works by adding it to the <strong>~/.xsession</strong> file or any other initialization file your window system executes:</p>
<pre class="sh_sh">
$ autocutsel -fork #sync between X and Gnome apps
$ autocutsel -selection PRIMARY -fork #sync between Gnome apps and X
</pre>
<p>Happy copy/pasting ☻</p>
bash autocompletion2012-01-01T00:00:00+00:00http://javier.io/blog/en/2012/01/01/bash-autocompletion<h2 id="bash-autocompletion">bash autocompletion</h2>
<h6 id="01-jan-2012">01 Jan 2012</h6>
<!--**[![](/assets/img/54.jpg)](/assets/img/54.jpg)**-->
<p><strong>Update:</strong> It’s highly recommended to upgrade <a href="https://viajemotu.wordpress.com/2013/10/16/upgrade-to-bash-completion-2-0/">bash-completion</a> to version >= 2.0 for improved performance.</p>
<p>I really like minimalist systems (and CLI apps): they are faster, more stable and easier to control. I think it’s pretty cool to be able to write a command and get a result without hesitation (I’m aware I’m probably already deprecated; in a world where touch and GUI applications are the norm, who would still prefer text based systems?). Sadly many of these commands are not especially user friendly: they contain tons of options and sometimes these options are quite hard to write correctly. When you download scripts from the Internet it gets worse, as all options must be written by hand because they lack autocompletion.</p>
<p>Many people don’t realize this autocompletion magic works through simple bash scripts, so I decided to write a few notes about the process.</p>
<h3 id="introduction">Introduction</h3>
<p>Bash triggers autocompletion every time a user presses <strong>&lt;Tab&gt;&lt;Tab&gt;</strong>; for most simple commands a single call to <strong>complete</strong> is enough to generate correct alternatives. Let’s suppose <strong>foo</strong> is a command that only takes directories as arguments; the autocompletion logic can be described as:</p>
<pre class="sh_sh">
$ complete -o plusdirs foo
</pre>
<p>From then on <strong>$ foo &lt;Tab&gt;&lt;Tab&gt;</strong> will return a directory list. That one was easy 😉 Now, let’s give more examples:</p>
<pre class="sh_sh">
$ complete -A user bar #will autocomplete bar with a list of system users
$ complete -W "-v --verbose -h" wop #will autocomplete wop with "-v", "--verbose" and "-h"
$ complete -f -X '!*.[pP][dD][fF]' evince foo #will autocomplete evince and foo with all pdf files
</pre>
<p>The full syntax for <strong>complete</strong> can be reviewed in the bash help, <strong>$ man bash</strong></p>
<h3 id="function-based">Function based</h3>
<p>One of the options <strong>complete</strong> accepts is <strong>-F</strong>, which calls a function; this function can be as elaborate as needed ✌ e.g:</p>
<pre class="sh_sh">
$ source file_where_pump_function_is_defined
$ complete -F \_pump pump
</pre>
<p>Now whenever pump is typed followed by &lt;Tab&gt;&lt;Tab&gt;, <strong>_pump()</strong> will be called and will be expected to fill the <a href="http://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html">COMPREPLY</a> array.</p>
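<p>The calling convention can be demonstrated without pressing any keys: fill <strong>COMP_WORDS</strong>/<strong>COMP_CWORD</strong> by hand (which is what bash does for you) and call the function directly. A minimal sketch, requiring bash, with a hypothetical pump command that takes three made-up subcommands:</p>

```sh
# a tiny completion function for an imaginary "pump" command
_pump() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=($(compgen -W "install remove status" -- "${cur}"))
}

# simulate: the user typed "pump s" and pressed [Tab][Tab]
COMP_WORDS=(pump s)
COMP_CWORD=1
_pump
printf '%s\n' "${COMPREPLY[@]}"   # prints: status
```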
<p>Most of the files that contain these functions live in <strong>/etc/bash_completion.d/</strong> and, in recent years, in <strong>/usr/share/bash-completion/completions/</strong> (in Debian/Ubuntu systems). Let’s suppose we have the following hand made script:</p>
<pre class="sh_sh">
$ fix -h
Usage: fix module
-h or --help List available arguments and usage (this message).
-v or --version print version.
apache poves /etc/init.d/apache2.1 to apache2.
ipw2200 restart the ipw2200 module.
wl restart the wl module.
iwlagn restart the iwlagn module.
mpd restart mpd.
</pre>
<p>The autocompletion logic can be defined in two ways; the easiest one is to dump the following line in <strong>/etc/bash_completion.d/fix.autocp</strong>:</p>
<pre class="sh_sh">
$ complete -W "-h --help -v --version apache ipw2200 wl iwlagn mpd" fix
</pre>
<p>After doing so, it’s necessary to reload the environment:</p>
<pre class="sh_sh">
$ source $HOME/.bashrc
</pre>
<p>WARNING: autocompletion will only work if it’s initialized in <strong>$HOME/.bashrc</strong> or other files read by bash (bash-completion must also be installed: $ sudo apt-get install bash-completion):</p>
<pre class="sh_sh">
if [ -f /etc/bash_completion ]; then
source /etc/bash_completion
fi
</pre>
<p>The more elaborate case involves defining a function; let’s replace the <strong>/etc/bash_completion.d/fix.autocp</strong> content with this:</p>
<pre class="sh_sh">
\_fix()
{
if ! command -v "fix" >/dev/null 2>&1; then
return
fi
#defining local vars
local cur prev words cword
_init_completion || return #comment this line for bash-completion <2.0
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
COMPREPLY=() #clean out last completions, important!
COMMANDS="apache ipw2200 wl iwlagn mpd"
OPTS="-h --help -v --version"
case "${cur}" in #if the current word have a '-' at the beginning..
-*) completions="${OPTS}" ;;
*) completions="${COMMANDS}" ;;
esac
COMPREPLY=($(compgen -W "${completions}" -- ${cur}))
return 0
}
complete -F \_fix fix
</pre>
<p>The <strong>$cur</strong> variable is important: parameters are compared against it; in more complex examples, <strong>$prev</strong> and even <strong>$prev_prev</strong> can be compared too.</p>
<p>To generate option lists, <strong>compgen</strong> is regularly used; this command compares and returns matched results as a list. To wrap it up, here are some examples:</p>
<pre class="sh_sh">
$ compgen -W "-v --verbose -h --help" -- "-v"
-v
$ compgen -W "-v --verbose -h --help" -- "--"
--verbose
--help
$ compgen -W "apache ipw2200 iwlagn mpd wl" -- "ap"
apache
$ compgen -W "apache ipw2200 iwlagn mpd wl" -- "i"
ipw2200
iwlagn
</pre>
<p>Once this two step process is understood, it’s easy to see how most autocompletion scripts work. I’ll now review a more complex example, <strong>android</strong>:</p>
<pre class="sh_sh">
$ android -h
Usage: android [global options] action [action options]
Global options:
-v --verbose Verbose mode: errors, warnings and informational messages are printed.
-h --help Help on a specific command.
-s --silent Silent mode: only errors are printed out.
Valid actions are composed of a verb and an optional direct object:
- list
- list avd
- list target
- create avd
- move avd
- delete avd
- update avd
- create project
- update project
- create test-project
- update test-project
- create lib-project
- update lib-project
- update adb
- update sdk
</pre>
<p>As can be noted, most options depend on a previous command; “avd” should only be returned when list is used as the action:</p>
<pre class="sh_sh">
$ android list[Tab][Tab]
</pre>
<p>And <strong>avd</strong>/<strong>target</strong> should be returned when no substring is present after list.</p>
<pre class="sh_sh">
$ android create[Tab][Tab]
</pre>
<p>Should return <strong>avd</strong>, <strong>project</strong>, <strong>test-project</strong> and <strong>lib-project</strong>:</p>
<pre class="sh_sh">
$ android create avd[Tab][Tab]
</pre>
<p>And <strong>-a</strong>, <strong>-c</strong>, <strong>-f</strong>, etc., should be returned when avd and create are the first parameters. The full autocompletion file for this example is located on <a href="https://github.com/javier-lopez/learn/blob/master/autocp/completions/android">github</a>; I’ll now explain the more important parts:</p>
<pre class="sh_sh">
\_android()
{
if ! command -v "android" >/dev/null 2>&1; then
return
fi
COMPREPLY=() #clean out last completions, important!
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
number_of_words=${#COMP_WORDS[@]}
if [ "${number_of_words}" -gt "2" ]; then
prev_prev="${COMP_WORDS[COMP_CWORD-2]}"
fi
</pre>
<p>A <strong>prev_prev</strong> variable is declared only when two or more arguments are written in the prompt.</p>
<pre class="sh_sh">
#=======================================================
# General options
#=======================================================
COMMANDS="list create move delete update"
#COMMANDS=`android -h | grep '^-' | sed -r 's/: .*//' \
| awk '{print $2}' | sort | uniq 2> /dev/null`
</pre>
<p>Option lists can be declared statically or generated at run time by parsing help screens; depending on the command, you can use whichever method feels more comfortable.</p>
<pre class="sh_sh">
OPTS="-h --help -v --verbose -s --silent"
#=======================================================
# Nested options [1st layer]
#=======================================================
list_opts="avd target"
create_opts="avd project test-project lib-project"
move_opts="avd"...
</pre>
<p>The same can be done for subcommands.</p>
<pre class="sh_sh">
#=======================================================
# Nested options [2nd layer]
#=======================================================
create_avd_opts="-c --sdcard -t --target -n --name -a \
--snapshot -p --path -f -s --skin"
create_project_opts="-n --name -t --target -p --path -k \
--package -a --activity"
create_test-project_opts="-p --path -m --main -n --name"
create_lib-project_opts="-n --name -p --path -t --target \
-k --package"...
</pre>
<p>And subcommand options…</p>
<pre class="sh_sh">
if [ -n "${prev_prev}" ]; then
#2nd layer
case "${prev_prev}" in
create)
case "${prev}" in
avd)
COMPREPLY=($(compgen -W "${create_avd_opts}" -- ${cur}))
return 0
;;
project)
COMPREPLY=($(compgen -W "${create_project_opts}" -- ${cur}))
return 0
;;
...
esac
</pre>
<p>Depending on <strong>$prev_prev</strong>, <strong>$prev</strong> and <strong>$cur</strong> the correct list will be returned, <strong>$ android subcommand option incomplete_option#CURSOR#</strong></p>
<pre class="sh_sh">
case "${prev}" in
##1st layer
list)
COMPREPLY=($(compgen -W "${list_opts}" -- ${cur}))
return 0
;;
create)
COMPREPLY=($(compgen -W "${create_opts}" -- ${cur}))
return 0
;;
...
</pre>
<p><strong>$ android subcommand incomplete_option#CURSOR#</strong></p>
<pre class="sh_sh">
#general options
case "${cur}" in
-*)
COMPREPLY=($(compgen -W "${OPTS}" -- ${cur}))
;;
*)
COMPREPLY=($(compgen -W "${COMMANDS}" -- ${cur}))
;;
esac
}
complete -F \_android android
</pre>
<p><strong>$ android incomplete_subcommand#CURSOR#</strong></p>
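<p>Putting the pieces together, here is a stripped-down but complete script for a hypothetical <strong>hello</strong> command with a single <strong>greet</strong> subcommand; the names are illustrative, not part of the real android completion:</p>

```shell
# Completion for a made-up `hello` command (bash).
_hello() {
    local cur prev
    cur=${COMP_WORDS[COMP_CWORD]}
    prev=${COMP_WORDS[COMP_CWORD-1]}

    # 1st layer: options of a known subcommand
    case "$prev" in
        greet)
            COMPREPLY=($(compgen -W "-n --name" -- "$cur"))
            return 0
            ;;
    esac

    # fallback: general options or subcommand names
    case "$cur" in
        -*) COMPREPLY=($(compgen -W "-h --help" -- "$cur")) ;;
        *)  COMPREPLY=($(compgen -W "greet version" -- "$cur")) ;;
    esac
}
complete -F _hello hello
```

<p>Sourcing this file and typing <strong>hello greet --n[Tab]</strong> completes to <strong>--name</strong>, while <strong>hello g[Tab]</strong> completes to <strong>greet</strong>.</p>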
<h3 id="extra">Extra</h3>
<h4 id="available-functions">Available functions</h4>
<p>There are plenty of predefined functions that can be used to autocomplete commonly used options; for example, if a command accepts a <strong>-f</strong> option for file arguments, the <strong>_filedir</strong> function can be used:</p>
<pre class="sh_sh">
-f)
_filedir
return 0
;;
</pre>
<p>Other pre-defined functions can be found at: <a href="http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=blob;f=bash_completion">http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=blob;f=bash_completion</a></p>
<h3 id="debug">Debug</h3>
<p>Bash autocompletion scripts are easy to create; however, eventually (especially with larger CLI commands) there are chances things don’t work as expected. In those cases, enabling bash verbose mode is the easiest and fastest way to debug such scripts:</p>
<pre class="sh_sh">
$ set -x
$ source ~/.bashrc #reload the environment
$ command opc[Tab][Tab] #testing the autocompletion
...
...
... verbose output
</pre>
<h3 id="examples">Examples</h3>
<p>There are many bash completion scripts on the <a href="http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=tree;f=completions">Internet</a> and <a href="https://github.com/javier-lopez/learn/tree/master/autocp/completions">some others</a> in my personal repository. Looking at examples is probably the easiest way to learn the harder details.</p>
<h3 id="final-thoughts">Final thoughts</h3>
<p>Bash autocompletion may seem scary at the beginning, but once you’ve read several examples a clear pattern emerges. Depending on your system usage it can save you a lot of time/typing, so next time you find yourself typing too much, give it a shot and let the computer do the job for you 😊</p>
<ul>
<li><a href="http://bash-completion.alioth.debian.org/">http://bash-completion.alioth.debian.org/</a></li>
</ul>
soporte técnico de steren2011-12-11T00:00:00+00:00http://javier.io/blog/es/2011/12/11/soporte-tecnico-steren<h2 id="soporte-técnico-de-steren">soporte técnico de steren</h2>
<h6 id="11-dec-2011">11 Dec 2011</h6>
<p>Nothing much: I have been reading Manuel Micheline’s blog (<a href="http://la-morsa.blogspot.com/">La morsa</a>) for a while, a professor at UNAM’s Faculty of Sciences / FIDE chess master and amateur programmer, and I felt like creating an entry imitating his style a bit.</p>
<p>This morning a Steren router arrived (<a href="http://www.steren.com.mx/_files/search.asp?s=COM-840\">COM-840)</a> that will be used to extend the network signal at the facilities where I work. When I saw it I was surprised, because I had no idea they sold them. Anyway, I was reading its manual and found an error that prevented entering the web interface, caused by a poor technical translation. I should clarify that I did not read the whole manual; I just finished configuring the device and put everything back in its place. While doing so, though, I noticed that the back of the manual carried a legend that read: <strong>“Este instructivo puede mejorar con tu ayuda, llamanos a XXX-XXX-XXX-XXX”</strong> (“This manual can improve with your help, call us at XXX-XXX-XXX-XXX”).</p>
<p>So I picked up the phone and dialed. The wait was minimal, a technical-support lady answered, I described the case and she took down the correction in less than 5 minutes. The way they handled it seemed professional to me, and although I do not have much hope for the router, I hope it surprises me too.</p>
print through the ldp protocol in a cups less environment2011-12-01T00:00:00+00:00http://javier.io/blog/en/2011/12/01/ldp-printer-cups-less<h2 id="print-through-the-ldp-protocol-in-a-cups-less-environment">print through the ldp protocol in a cups less environment</h2>
<h6 id="01-dec-2011">01 Dec 2011</h6>
<p>I’ve just discovered it’s possible to print through the <a href="http://en.wikipedia.org/wiki/Line_Printer_Daemon_protocol">LPD protocol</a> without <a href="http://www.cups.org/">CUPS</a> (cups-ldp in Ubuntu).</p>
<p>For an unknown reason I had always thought that all Linux systems required CUPS installed to talk to any printer, which is not the case. LPD can be used perfectly well to print to remote printers.</p>
<pre class="sh_sh">
$ rlpr -h -Plp -HIP_OF_THE_PRINTER_LDP_SERVER_HERE file.[ps|pdf]
</pre>
<p>In 2011 most Linux applications can print to pdf/ps files, so you can skip <strong>cups-pdf</strong> as well 😊. To print images, <strong>convert</strong> (part of the <a href="http://www.imagemagick.org/script/index.php">imagemagick</a> suite) can do the job:</p>
<pre class="sh_sh">
$ convert image.jpg file.ps
</pre>
<p>And for plain text files, vim shines:</p>
<pre class="sh_sh">
:hardcopy > file.ps
</pre>
<p>Aliases can help in case you find yourself typing the IP too often, eg <strong>alias print.192.168.1.11=’rlpr -h -Plp -H192.168.1.11’</strong>:</p>
<pre class="sh_sh">
$ print.192.168.1.11 file[.ps|pdf]
</pre>
<p>All my requirements are covered 😎</p>
<p>The printer/copier referenced on this post is the <a href="http://usa.canon.com/cusa/support/office/b_w_imagerunner_copiers/imagerunner_5050_5055_5065_5070_5075_5570_6570/imagerunner_6570">Canon ImageRunner 6570</a>:</p>
<ul>
<li><a href="http://www.mail-archive.com/misc@openbsd.org/msg56753.html">http://www.mail-archive.com/misc@openbsd.org/msg56753.html</a></li>
<li><a href="http://www.gnu.org/software/a2ps">http://www.gnu.org/software/a2ps</a></li>
<li><a href="http://www.wizards.de/~frank/pstill.html">http://www.wizards.de/~frank/pstill.html</a></li>
<li><a href="http://www.ghostscript.com/">http://www.ghostscript.com/</a></li>
</ul>
deb package cache2011-11-18T00:00:00+00:00http://javier.io/blog/en/2011/11/18/deb-packages-cache<h2 id="deb-package-cache">deb package cache</h2>
<h6 id="18-nov-2011">18 Nov 2011</h6>
<p><strong>Update:</strong> I created a <a href="https://raw.github.com/javier-lopez/learn/master/sh/is/apt-proxy">script</a> that automates the process described in this post.</p>
<h3 id="introduction">Introduction</h3>
<p>apt-cacher-ng is a kind of deb repository proxy: it caches deb packages <strong>on demand</strong> for the computers that share the cache, which makes it a great alternative for small environments. There are other alternatives, such as apt-cacher, apt-proxy and debmirror, but those solutions can take more space or be harder to set up, so I won’t talk about them.</p>
<p>On the client side, there are mainly two ways of taking advantage of such services: <a href="https://launchpad.net/squid-deb-proxy">squid-deb-proxy</a> and manual configuration. The first one uses <a href="http://avahi.org/">zeroconf</a> to detect and use deb proxies whenever they’re available; the second one, well, is manual and works by adding a line to <strong>/etc/apt/apt.conf.d/01apt-cache</strong> describing the proxy URL. In this post I’ll talk about squid-deb-proxy.</p>
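<p>For reference, the manual client setup mentioned above is a single apt directive. A sketch, assuming apt-cacher-ng’s default port 3142 and with SERVER_IP as a placeholder for your cache host:</p>

```
# /etc/apt/apt.conf.d/01apt-cache (SERVER_IP is a placeholder)
Acquire::http::Proxy "http://SERVER_IP:3142";
```

<p>This hardcodes the proxy, so unlike squid-deb-proxy-client it keeps pointing at the cache even when the machine leaves the local network.</p>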
<h3 id="installation">Installation</h3>
<p>[+] On the server side (which can be a client at the same time):</p>
<pre class="sh_sh">
$ sudo apt-get install apt-cacher-ng squid-deb-proxy-client
$ sudo wget http://javier.io/mirror/apt-cacher-ng.service -O /etc/avahi/services/apt-cacher-ng.service
$ sudo service apt-cacher-ng restart
</pre>
<p>[+] On the client side:</p>
<pre class="sh_sh">
$ sudo apt-get install squid-deb-proxy-client
</pre>
<p>After executing these commands the apt-cacher-ng server will announce itself to all computers on the local network, and client machines will autoconfigure their apt preferences depending on whether they see an apt-cacher-ng server or not. Pretty cool 😊</p>
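<p>The avahi service file downloaded in the server steps is what drives that announcement. A sketch of its likely contents; the <code>_apt_proxy._tcp</code> service type is what squid-deb-proxy-client scans for (verify against the actual file before relying on this):</p>

```xml
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the announcing machine's hostname -->
  <name replace-wildcards="yes">apt-cacher-ng proxy on %h</name>
  <service>
    <type>_apt_proxy._tcp</type>
    <port>3142</port>
  </service>
</service-group>
```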
<h3 id="extra">Extra</h3>
<h4 id="import-packages">Import packages</h4>
<p>Old packages (downloaded before setting up apt-cacher-ng) can be imported by executing:</p>
<pre class="sh_sh">
$ sudo mkdir -pv -m 2755 /var/cache/apt-cacher-ng/_import
$ sudo mv -vuf /var/cache/apt/archives/*.deb /var/cache/apt-cacher-ng/_import/
$ sudo chown -R apt-cacher-ng:apt-cacher-ng /var/cache/apt-cacher-ng/_import
$ sudo apt-get update
</pre>
<p>And going to <a href="http://localhost:3142/acng-report.html">http://localhost:3142/acng-report.html</a> where a ‘<strong>Start import</strong>’ button will show up:</p>
<p><strong><a href="/assets/img/57.png"><img src="/assets/img/57.png" alt="" /></a></strong></p>
<h4 id="delete-apt-cacher-ng">Delete apt-cacher-ng</h4>
<p>This setup can be destroyed at any time by running:</p>
<p>[+] On the server:</p>
<pre class="sh_sh">
$ sudo apt-get remove apt-cacher-ng squid-deb-proxy-client
$ sudo rm -rf /var/cache/apt-cacher-ng
</pre>
<p>[+] On the clients:</p>
<pre class="sh_sh">
$ sudo apt-get remove squid-deb-proxy-client
</pre>
<p>Happy caching 😏</p>
you just got kernelroll'd ;)2011-11-09T00:00:00+00:00http://javier.io/blog/en/2011/11/09/you-just-got-kernelrolled<h2 id="you-just-got-kernelrolld-">you just got kernelroll’d ;)</h2>
<h6 id="09-nov-2011">09 Nov 2011</h6>
<p><strong><a href="/assets/img/56.png"><img src="/assets/img/56.png" alt="" /></a></strong></p>
<p>Rickrolling in kernel space ☺, this hack will intercept any system call that opens multimedia files and replace them with rickrolling.mp3 😉</p>
<p>To set it up in Ubuntu 10.04 you’ll need systemtap:</p>
<pre class="sh_sh">
$ sudo apt-get install systemtap
</pre>
<p>Systemtap requires the kernel <a href="http://en.wikipedia.org/wiki/Debug_symbol">debug symbols</a>, which <a href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/289087">cannot be installed</a> from the repositories in <strong>lucid</strong>, although they can be downloaded from <a href="http://ddebs.ubuntu.com/pool/main/l/linux/">http://ddebs.ubuntu.com/pool/main/l/linux/</a>.</p>
<p>In this particular case I’ve installed the 2.6.32 kernel:</p>
<pre class="sh_sh">
$ sudo dpkg -l|grep linux-image
ii linux-image-2.6.32-34-generic
$ uname -m
x86_64
</pre>
<p>Therefore I’ll download the following file (~450MB):</p>
<pre class="sh_sh">
$ wget http://ddebs.ubuntu.com/pool/main/l/linux/linux-image-2.6.32-34-generic-dbgsym_2.6.32-34.77_amd64.ddeb
$ sudo dpkg -i linux-image-2.6.32-34-generic-dbgsym_2.6.32-34.77_amd64.ddeb
</pre>
<p>Upon completion, the hack can be enabled this way:</p>
<pre class="sh_sh">
$ sudo stap -e 'probe kernel.function("do_filp_open")\
{ p = kernel_string($pathname); l=strlen(p); \
ext = substr(p, l - 4, l); if (ext == ".mp3" || ext == ".ogg" \
|| ext == ".mp4") { system("mplayer /path/to/rickroll.mp3"); }}'
</pre>
<p>If you’re curious about other stap use cases, take a look at the documentation:</p>
<ul>
<li><a href="http://sources.redhat.com/systemtap/">http://sources.redhat.com/systemtap/</a></li>
</ul>
compile software in pristine environments with pbuilder2011-11-09T00:00:00+00:00http://javier.io/blog/en/2011/11/09/compile-software-in-pristine-environments-with-pbuilder<h2 id="compile-software-in-pristine-environments-with-pbuilder">compile software in pristine environments with pbuilder</h2>
<h6 id="09-nov-2011">09 Nov 2011</h6>
<p>I’m obsessed with order (at least on my system): every time I have to install software that is not included in the Ubuntu repositories I create .deb packages for it. Sometimes it’s easy enough to do manually (if the program doesn’t involve a lot of dependencies); other times I rely on <a href="http://asic-linux.com.mx/%7Eizto/checkinstall/">checkinstall</a> or <a href="https://github.com/jordansissel/fpm">fpm</a> to do the job. Some people wonder why I still create hand-made packages if fpm and checkinstall are available; well, I do it because both tools can only create .deb packages but not .dsc definitions (a kind of .deb source package). These .dsc files can be uploaded to a PPA on <a href="http://launchpad.net/">launchpad.net</a>, where they’ll be compiled and stored. You want to do that because then you can use the URL of your PPA to get automatic dependency resolution and good download speeds.</p>
<p>When creating such packages (using any method) you’ll still need to download (at least temporarily) the build dependencies. As an order-obsessed person I avoid doing that directly on my system and use chroot environments instead; these chroot boxes are way cheaper than virtualization solutions and faster to set up. Since I already use <a href="https://viajemotu.wordpress.com/2010/08/10/notas-sobre-pbuilder">pbuilder</a> and it has a very nice <strong>–login</strong> option, I use it to create temporary environments and destroy them on exit.</p>
<p>Let’s suppose a new ffmpeg version has just been released and you want to try it on your stable Ubuntu system; these would be the steps necessary to compile, package and install it on your host system.</p>
<pre class="sh_sh">
$ sudo apt-get -y remove ffmpeg x264 libx264-dev libmp3lame-dev
$ sudo pbuilder.natty --login
[natty-chroot] # apt-get install wget
[natty-chroot] # apt-get -y install nasm build-essential git-core \
checkinstall yasm texi2html libfaac-dev libopencore-amrnb-dev \
libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libvorbis-dev \
libx11-dev libxfixes-dev libxvidcore-dev zlib1g-dev
[natty-chroot] # git clone git://git.videolan.org/ffmpeg
[natty-chroot] # cd ffmpeg
[natty-chroot] # ./configure --enable-gpl --enable-version3 --enable-nonfree \
--enable-postproc --enable-libfaac --enable-libopencore-amrnb \
--enable-libopencore-amrwb --enable-libtheora --enable-libvorbis \
--enable-libx264 --enable-libxvid --enable-x11grab --enable-libmp3lame
[natty-chroot] # make
[natty-chroot] # checkinstall --pkgname=ffmpeg --pkgversion="5:$(./version.sh)" \
--backup=no --deldoc=yes --default
$ cp /var/cache/pbuilder/natty-amd64/build/{number}/home/user/ffmpeg.deb ~
$ sudo dpkg -i ffmpeg_5:201111091946-git-1_amd64.deb
</pre>
<p>Once installed, the tmp chroot environment can be destroyed by terminating the session:</p>
<pre class="sh_sh">
[natty-chroot] # exit
</pre>
<p>Sweet ü</p>
virtualbox and kvm sideways2011-10-31T00:00:00+00:00http://javier.io/blog/en/2011/10/31/virtualbox-kvm-sideways<h2 id="virtualbox-and-kvm-sideways">virtualbox and kvm sideways</h2>
<h6 id="31-oct-2011">31 Oct 2011</h6>
<p><strong><a href="/assets/img/55.png"><img src="/assets/img/55.png" alt="" /></a></strong></p>
<p>The above image contains a common error people get whenever they try to use VirtualBox and KVM at the same time. Some forum posts suggest uninstalling KVM; however, it’s quite simple to keep both solutions installed side by side.</p>
<p>In Ubuntu, every time VirtualBox is going to be used, the KVM kernel modules should be disabled:</p>
<pre class="sh_sh">
$ sudo service qemu-kvm stop && sudo service vboxdrv start
</pre>
<p>And vice versa:</p>
<pre class="sh_sh">
$ sudo service vboxdrv stop && sudo service qemu-kvm start
</pre>
<p>For other distributions, <strong>rmmod/modprobe/lsmod</strong> can do the job 😉</p>
stop firefox directory autocreation2011-10-29T00:00:00+00:00http://javier.io/blog/en/2011/10/29/stop-firefox-directory-autocreation<h2 id="stop-firefox-directory-autocreation">stop firefox directory autocreation</h2>
<h6 id="29-oct-2011">29 Oct 2011</h6>
<p>By default Firefox creates <strong>Desktop</strong> and <strong>Download</strong> directories in <strong>$HOME</strong> according to <a href="http://www.freedesktop.org/wiki/Software/xdg-user-dirs">freedesktop policies</a>. This feature can be annoying for some people (including me); IMO nobody should force you to use a pre-fixed directory layout.</p>
<p>To disable this feature the <strong>$HOME/.config/user-dirs.dirs</strong> file should be edited as follows:</p>
<pre class="sh_sh">
$ cat $HOME/.config/user-dirs.dirs
XDG_DESKTOP_DIR="$HOME/./"
XDG_DOWNLOAD_DIR="$HOME/./"
XDG_TEMPLATES_DIR="$HOME/./"
</pre>
<p>The Linux desktop specifications are pretty dumb 😔</p>
proxy ssh + socks2011-10-06T00:00:00+00:00http://javier.io/blog/en/2011/10/06/proxy-ssh-socks<h2 id="proxy-ssh--socks">proxy ssh + socks</h2>
<h6 id="06-oct-2011">06 Oct 2011</h6>
<h4 id="problem">Problem</h4>
<ul>
<li>Facebook, Twitter, Youtube, etc are blocked.</li>
</ul>
<h4 id="solution">Solution</h4>
<ul>
<li>Route traffic through ssh tunnels.</li>
</ul>
<h3 id="ingredients">Ingredients</h3>
<ul>
<li>Unix account in an external host, eg; <a href="http://cjb.net">cjb.net</a>, vps, etc</li>
<li>Ssh client</li>
<li>Traffic allowed through the 22 port (or any other port)</li>
</ul>
<h3 id="procedure">Procedure:</h3>
<ul>
<li>Create an ssh tunnel:</li>
</ul>
<pre class="sh_sh">
[local]$ ssh -C2qTnN -D 9090 username@remote.machine
</pre>
<ul>
<li>Configure firefox to use the tunnel:
<ul>
<li>Edit ➮ Preferences ➮ Advanced ➮ Network ➮ Settings ➮ Manual proxy configuration</li>
</ul>
</li>
<li>SOCKS Proxy 127.0.0.1 Port 9090</li>
</ul>
<h3 id="extra">Extra</h3>
<p>To get extra security connections can go through N nodes:</p>
<pre>
Firefox (local) ➟ host-1 ➟ host-2 ➟ host-n ➟ Internet
</pre>
<pre class="sh_sh">
[local]$ ssh -C2qTnN username@host-1 -L 9090:localhost:9090
[host1]$ ssh -C2qTnN username@host-2 -L 9090:localhost:9090
...
...
[hostn-1]$ ssh -C2qTnN -D 9090 username@host-n
</pre>
<p>Happy hacking 😈</p>
a lenda da cuca2011-09-29T00:00:00+00:00http://javier.io/blog/pt/2011/09/29/a-lenda-da-cuca<h2 id="a-lenda-da-cuca">a lenda da cuca</h2>
<h6 id="29-sep-2011">29 Sep 2011</h6>
<p><strong><img src="/assets/img/74.jpg" alt="" /></strong></p>
<p>It tells of a very ugly woman with the shape of an alligator; she steals the children who do not obey their parents. The cuca cannot sleep, and that is why she goes wandering through the nights. When she gets angry she lets out a scream that can be heard ten leagues away.</p>
<p>Children who do not want to be taken have to go to sleep early.</p>
xperia mini pro - custom roms2011-08-21T00:00:00+00:00http://javier.io/blog/es/2011/08/21/xperia-mini-pro-custom-roms<h2 id="xperia-mini-pro---custom-roms">xperia mini pro - custom roms</h2>
<h6 id="21-aug-2011">21 Aug 2011</h6>
<p>Not much to say: I recently got an Xperia Mini Pro (for $2800), after having lost an Xperia Mini =(, and this weekend I experimented until I left it with a configuration I find acceptable, so here is a brief summary of the steps I followed, to help my future self.</p>
<p>This tutorial should not be followed unless you have exactly the same phone and want to obtain the same result.</p>
<p>BEFORE</p>
<ul>
<li>Android <strong>1.6</strong> (SonyEricson) / <strong>Donut</strong></li>
<li>baseband version <strong>M76XX-TSNCJOLYM-53404006</strong></li>
<li>Model: u20a</li>
</ul>
<p><strong><a href="/assets/img/41.png"><img src="/assets/img/41.png" alt="" /></a></strong>
<strong><a href="/assets/img/42.jpeg"><img src="/assets/img/42.jpeg" alt="" /></a></strong>
<strong><a href="/assets/img/43.png"><img src="/assets/img/43.png" alt="" /></a></strong></p>
<p>AFTER</p>
<ul>
<li>Android <strong>2.3.5</strong> (GinTonic.Se 1.3) / <strong>Gingerbread</strong></li>
<li>baseband version <strong>M76XX-TSNCJOLYM-53404015</strong></li>
<li>Model: u20i</li>
</ul>
<p><strong><a href="/assets/img/44.png"><img src="/assets/img/44.png" alt="" /></a></strong>
<strong><a href="/assets/img/45.png"><img src="/assets/img/45.png" alt="" /></a></strong>
<strong><a href="/assets/img/46.png"><img src="/assets/img/46.png" alt="" /></a></strong></p>
<p>Download:</p>
<ul>
<li><strong>build.prop</strong> (useful to change the baseband <strong>53404006</strong> -> <strong>53404015</strong>)</li>
<li><strong>root-xrecovery_x10miniPro.exe</strong> (roots the phone and installs xrecovery, requires Windows)</li>
<li><strong>mybackupPro.apk</strong> (backup of contacts, sms, apps)</li>
<li><a href="http://www.multiupload.com/4W2VJ6URER">http://www.multiupload.com/4W2VJ6URER</a></li>
</ul>
<h3 id="faq">FAQ</h3>
<p><strong>What is Android?</strong></p>
<p>Android is an operating system focused on phones and tablets, the main competitor of the operating system that ships on iPhones. It is based on Linux, although it modifies many essential parts of it. It has thousands of applications, GPS, voice recognition, Skype, Twitter, Facebook, YouTube, Angry Birds, etc.</p>
<p><strong>What is a custom ROM?</strong></p>
<p>A custom ROM is a modified operating system that replaces the one Sony Ericson/Telcel put on your phone.</p>
<p><strong>What is the advantage of a custom ROM?</strong></p>
<p>First of all, it runs a more recent version of Android, which brings a performance boost; for example, Android 2.2 is twice as fast as Android 2.1, and Android 2.3 is 25% faster than version 2.2. It can also do other things, for example moving applications to the external memory card (app2sd) or sharing the Internet connection with other devices over USB/WiFi (requires a data plan).</p>
<p><strong>What are the requirements?</strong></p>
<p>Having xrecovery installed and having ‘rooted’ the phone; additionally, it helps to keep PC Companion or SEUS (Sony Ericson Update Service) at hand to restore the phone to its factory installation.</p>
<p><strong>What is ‘rooting’?</strong></p>
<p>Rooting means obtaining administrator permissions. By default, phones ship with security measures that prevent, for example, removing the applications that come pre-installed (game demos, Ideas Telcel, etc.). With a rooted phone it is possible to remove them.</p>
<p><strong>What is xrecovery?</strong></p>
<p>Xrecovery is a program used to install custom ROMs; it is accessed by pressing the back arrow key at boot (when the ‘sony ericson’ splash appears)</p>
<p><strong>How is a custom ROM installed?</strong></p>
<p>Custom ROMs are distributed as zip files (one zip file per ROM) and are installed with xrecovery</p>
<p><strong>What legal implications does installing a custom ROM have?</strong></p>
<p>The warranty is voided; however, the process can be reverted if at some point you need to reinstall the version the phone shipped with.</p>
<h3 id="backup">Backup</h3>
<p>If the phone was already in use, there is probably data you do not want to lose; the Market has applications that help with this, in addition to the data synced with the Google account defined on the phone.</p>
<p>Personally I used <strong>mybackup pro</strong> (SMS) and <strong>titanium backup</strong> (applications); mybackup pro is included in the download link, titanium can be obtained from the Market.</p>
<p>To install applications from outside the Market, <strong>‘Unknown Sources’</strong> must be enabled</p>
<ul>
<li>Settings/Applications/Unknown Sources</li>
</ul>
<h3 id="rootear-e-instalar-xrecovery">Root and install xrecovery</h3>
<p>There are methods and applications that grant root permissions and install xrecovery, but the simplest one I found was 1click:</p>
<ul>
<li>root-xrecovery_x10miniPro.exe (included in the zip)</li>
</ul>
<p>To use it, follow these steps:</p>
<ol>
<li>Prepare the phone: enable “USB debugging” in <strong>settings/applications/development</strong></li>
<li>Extract the file (I did it with WinRAR)</li>
<li>Connect the phone to the computer</li>
<li>Open oneclickroot</li>
<li>Click ‘root’ and wait for it to finish (it should not take more than 2 min)</li>
<li>Click ‘xrecover’ and wait another couple of minutes</li>
<li>Reboot</li>
</ol>
<p>Now the phone should be rooted and have xrecovery (you can test it by pressing the arrow key at boot - a menu will appear)</p>
<h3 id="cambiar-banda-base">Change the baseband</h3>
<p>Many custom ROMs are only compatible with baseband version 53404015, so you have to update to that version before flashing the ROMs; otherwise the phone will enter an infinite reboot loop.</p>
<p>For that, copy</p>
<ul>
<li><strong>build.prop</strong> to <strong>/system</strong></li>
</ul>
<p>This requires a file browser with administrator permissions; <strong>‘File Expert’</strong> (available through the Market) can be used, although by default it does not have those permissions. They can be enabled in:</p>
<ul>
<li>File Expert/menu/more/settings/file explorer settings/<strong>root explorer</strong></li>
</ul>
<p>Restart the program and replace <strong>build.prop</strong> in <strong>/system</strong></p>
<p>After rebooting, connect the phone to a computer running PC Companion (included with the phone) and restore it to the factory setup from there; the approximate time, depending on the connection, is 40 min (at 100 kb/s).</p>
<p>After the phone reboots, the baseband should be <strong>53404015</strong>. If so, follow the steps in “<strong>Root and install xrecovery</strong>” again to reinstall the necessary components.</p>
<h3 id="instalar-custom-roms">Install custom ROMs</h3>
<p>It is advisable to make a backup of the factory ROM; that way, if the custom ROM does not convince you, the original system can be recovered to try another one.</p>
<p>To create a backup, reboot the phone and press the key with an arrow; a menu will appear, where you select (move with the volume keys, select with the middle key)</p>
<ul>
<li>backup and restore/<strong>Backup</strong></li>
</ul>
<p>That will create a backup in <strong>/sdcard/xrecovery</strong>; to recover it, select:</p>
<ul>
<li>backup and restore/<strong>restore</strong></li>
</ul>
<p>Some of the custom ROMs available for the Xperia Mini Pro are listed here:</p>
<ul>
<li><a href="http://tinyurl.com/3smura5">http://tinyurl.com/3smura5</a></li>
</ul>
<p>After trying a few, I have decided to keep using <strong>GinTonic.SE v1.3</strong> (~120MB), available at:</p>
<ul>
<li><a href="http://forum.xda-developers.com/showthread.php?t=1207740">http://forum.xda-developers.com/showthread.php?t=1207740</a></li>
</ul>
<p>Installing this one (and others) goes through:</p>
<ol>
<li>Reboot the phone</li>
<li>Enter xrecovery (press the back arrow key several times)</li>
<li>Go to <strong>install zip from sdcard/choose zip from sdcard</strong> and select the zip file</li>
<li>Wait for the installation to finish</li>
<li>Go to <strong>wipe data/factory reset</strong></li>
<li>Reboot</li>
</ol>
<p>With that, the new OS will boot; fill in the profile (gmail account), install mybackupPro/titanium and restore your data/applications.</p>
<p>These custom ROMs already come rooted and with xrecovery, so there is no need to run 1click a third time.</p>
<h3 id="extra-intermedio">EXTRA (intermediate)</h3>
<p>Install (from the Market)</p>
<ul>
<li>Taskiller</li>
<li>AdFree</li>
</ul>
<p>Free up space</p>
<p>To move applications to the external card go to:</p>
<ul>
<li>
<p><strong>menu/manage applications/downloaded</strong>, then select the application and press <strong>move to sdcard</strong>; this frees roughly 50% of the app’s size.</p>
</li>
<li>
<p>Another trick is to delete <strong>/data/dalvik-cache</strong> and reboot (frees ~13 MB)</p>
</li>
<li>
<p>Or delete the programs’ caches in <strong>/data/data</strong>; to find out which app stores the most data, open a console (included with the ROM) and type:</p>
</li>
</ul>
<pre class="sh_sh">
$ du -sk /data/data/* | sort -rn | head
</pre>
<p>And then:</p>
<pre class="sh_sh">
$ rm -rf /data/data/some_app/cache
</pre>
<p>The caches of some programs can also be moved elsewhere; for that, make a symlink:</p>
<pre class="sh_sh">
$ mkdir -p /sdcard/cache/market
$ cd /data/data/com.android.vending
$ rm -R cache
$ ln -s /sdcard/cache/market cache
</pre>
causa e efeito2011-07-02T00:00:00+00:00http://javier.io/blog/pt/2011/07/02/causa-e-efeito<h2 id="causa-e-efeito">causa e efeito</h2>
<h6 id="02-jul-2011">02 Jul 2011</h6>
<p>I really like songs of social struggle.</p>
<div id="youtube">
<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/L_gcV8O5Zl8?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/L_gcV8O5Zl8?hl=en_US&version=3" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
não tenho dinheiro2011-05-20T00:00:00+00:00http://javier.io/blog/pt/2011/05/20/nao-tenho-dinheiro<h2 id="não-tenho-dinheiro">não tenho dinheiro</h2>
<h6 id="20-may-2011">20 May 2011</h6>
<p>Me neither! =)</p>
<div id="youtube">
<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/zdCF5Uknu-8?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/zdCF5Uknu-8?hl=en_US&version=3" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p></p>
<pre class="lyric">
Vou, pela rua, caminhando mas pensando em meu amor
e vou, decorando coisas serias que preciso-lhe dizer
porque amo muito mas entendo que tao só de mi esperar
e eu, não sou nada sou tao pobe que nao posso-me casar
Não tenho dinheiro nem nada que dar, eu tenho solamente
amor para amar, si ela me ama vai compreender
si não para sempre vou ter esquecer
Não tenho dinheiro nem nada que dar, eu tenho solamente
amor para amar, si ela me ama vai compreender
si não para sempre vou ter esquecer
Eu sei, que ao seu lado amo muito e me sinto tao feliz
e sei que adiz cerqui sou tao pobe seu amor já perdi
entao, eu queria ter de todo por o mundo a seus pes
Mas eu naci pobre e é por isso que sempre ninguem me quer
Não tenho dinheiro nem nada que dar eu tenho solamente
amor para amar, si ela me ama vai compreender
si não para sempre vou ter esquecer
Não tenho dinheiro nem nada que dar eu tenho solamente
amor para amar, si ela me ama vai compreender
si não para sempre vou ter esquecer
Não tenho dinheiro nem nada que dar eu tenho solamente
amor para amar, si ela me ama vai compreender
si não para sempre pote diz-que ser
</pre>
<p></p>
watch_battery2011-04-07T00:00:00+00:00http://javier.io/blog/en/2011/04/07/watch-battery<h2 id="watch_battery">watch_battery</h2>
<h6 id="07-apr-2011">07 Apr 2011</h6>
<p><strong><a href="/assets/img/40.png"><img src="/assets/img/40.png" alt="" /></a></strong></p>
<p>I made a little <a href="https://github.com/minos-org/minos-tools/blob/master/tools/watch-battery">script</a> to look after my laptop battery so it doesn’t shut down in the middle of my work. It requires <strong>notify-send</strong>, <strong>hibernate</strong> and <strong>acpi</strong>, and targets Ubuntu:</p>
<pre class="sh_sh">
$ sudo apt-get install acpi libnotify-bin hibernate
</pre>
<p><strong>WARNING:</strong> For the hibernation to work the computer requires to have enough SWAP space (more than the amount of RAM)</p>
<p>The script analyzes the battery status and sends notifications if the charge drops below 15%, 10% or 7%; if it reaches 5% it sends a final warning and hibernates the machine.</p>
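<p>The decision logic is just a threshold ladder. A sketch with the 15/10/7/5 cut-offs from this post, where <code>pct</code> stands in for the value parsed from <code>acpi -b</code> and the action names are made up:</p>

```shell
#!/bin/sh
# pct is a stand-in for something like:
#   acpi -b | grep -oE '[0-9]+%' | tr -d '%'
pct=9

if   [ "$pct" -le 5 ];  then action="hibernate"       # final warning, then sudo $ACTION
elif [ "$pct" -le 7 ];  then action="notify-critical"
elif [ "$pct" -le 10 ]; then action="notify-low"
elif [ "$pct" -le 15 ]; then action="notify"
else                         action="none"
fi
echo "$action"
```

<p>Checking from the lowest threshold upward keeps each charge level mapped to exactly one action.</p>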
<p>I recommend executing it every minute; a cron job can help:</p>
<pre class="sh_log">
*/1 * * * * /usr/local/bin/watch_battery
</pre>
<p>If you prefer to shut down or suspend the machine, modify the <strong>$ACTION</strong> variable:</p>
<pre class="sh_sh">
# Actions
ACTION="$(command -v hibernate)"
</pre>
<p>Make sure <strong>sudo</strong> can call the action command without requiring a password:</p>
<pre class="sh_properties">
#===================================
# Cmnd alias specification
Cmnd_Alias SESSION=/usr/sbin/pm-suspend,/usr/sbin/hibernate,/sbin/shutdown
# user may use specific commands without a password
user ALL=(root) NOPASSWD:SESSION
#===================================
</pre>
<p>Special thanks to <a href="http://forums.debian.net/viewtopic.php?f=8&t=52115#p299406">smasty</a> for the initial snippet.</p>
<ul>
<li><a href="https://gist.github.com/913004">https://gist.github.com/913004</a></li>
</ul>
don't let cd slow you down, cd wrappers: wcd, commacd2011-04-05T00:00:00+00:00http://javier.io/blog/en/2011/04/05/dont-let-cd-slow-you-down-wcd-commacd<h2 id="dont-let-cd-slow-you-down-cd-wrappers-wcd-commacd">don’t let cd slow you down, cd wrappers: wcd, commacd</h2>
<h6 id="05-apr-2011">05 Apr 2011</h6>
<!--<iframe class="showterm" src="http://showterm.io/ae29f68bee555cd89c65d" width="640" height="350"> </iframe>-->
<p>Using a console interface to manage a computer has its disadvantages, some of them especially visible when dealing with multiple files at the same time (moving/renaming/copying), typing long and cryptic commands/options, or moving around. On this entry I’ll talk about the last one.</p>
<p>The default <code class="language-plaintext highlighter-rouge">cd</code> behaviour on bash to change directories is quite strict: it requires writing full/relative paths and doesn’t recognize fuzzy search or any more sophisticated way of finding directories. Some people use many <a href="https://github.com/relevance/etc/blob/master/bash/project_aliases.sh">aliases</a> to work around these issues, others change their default shell or use third party tools, eg: <a href="https://github.com/clvv/fasd">fasd</a>, <a href="https://github.com/vigneshwaranr/bd">bd</a>, <a href="https://github.com/junegunn/fzf">fzf</a>, etc, to create a semi automatic way of moving faster.</p>
<p>I prefer everything as minimalist and automatic as possible, so after reviewing lots of custom scripts, alternative shells and third party hacks, I think I’ve something good enough to write about and use on a daily basis.</p>
<p>It all starts by enabling some semi hidden bash options for autocorrecting and autocompleting directories:</p>
<pre class="sh_sh">
$ head ~/.bashrc
# http://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
if [ "${BASH_VERSINFO}" -ge "4" ]; then
shopt -s autocd cdspell dirspell
fi
</pre>
<p>Now, bash can autocorrect and autocomplete <strong>1/2/3/foo</strong> in the following scenarios:</p>
<pre class="sh_sh">
$ cd 1/2/3/foo
$ cd 1/2/3/ofo
$ 1/2/3/foo
</pre>
<p>Moving between important directories and parent directories can be optimized by adding some aliases:</p>
<pre class="sh_sh">
$ head ~/.alias.common
alias ..="cd .."
alias ....="cd ../.."
alias important.path="cd important/path"
</pre>
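<p>The parent directory aliases can be generalized with a tiny function; a sketch (<code>up</code> is my own name for it, not a standard tool):</p>

```shell
#go up N directories: `up 3` is equivalent to `cd ../../..`, `up` alone to `cd ..`
up() {
    _path="" _i=0
    while [ "${_i}" -lt "${1:-1}" ]; do
        _path="../${_path}"
        _i="$((_i + 1))"
    done
    cd "${_path}" || return 1
}
```

<p>eg, <code>$ up 2</code> from <strong>~/code/projects/zion</strong> lands in <strong>~/code</strong>.</p>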
<p>Now it’s time for some major improvements. An important <a href="http://www.zsh.org">zsh</a> cd related feature is pattern recognition, e.g. <strong>$ cd s*/m*/pl</strong> will become <strong>super/master/plan</strong>. That’s sweet; unfortunately bash is unable to recognize such patterns by itself, however with some <a href="http://wcd.sourceforge.net/">help</a> it can do it even better.</p>
<pre class="sh_sh">
$ sudo apt-get install wcd
$ head ~/.alias.common
alias cd='. wcd'
</pre>
<p><strong>wcd</strong> is not a binary, it’s a wrapper script around <code class="language-plaintext highlighter-rouge">wcd.exec</code> (available on the <code class="language-plaintext highlighter-rouge">wcd</code> package):</p>
<ul>
<li><a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/wcd">https://github.com/javier-lopez/learn/blob/master/sh/tools/wcd</a></li>
</ul>
<p>Once installed and configured, <strong>$ cd s*/m*/pl</strong> will take us to <strong>super/master/plan</strong> no matter what the current directory is. <a href="http://wcd.sourceforge.net/">wcd</a> works by creating an index file with all available directories and looking at it to find the best approximation.</p>
<p><strong>WARNING:</strong> wcd will need to regenerate the index db every now and then, a cronjob with the following content can help:</p>
<pre class="sh_sh">
0 23 * * * /usr/local/bin/update-cd
</pre>
<pre class="sh_sh">
$ cat /usr/local/bin/update-cd
#!/bin/sh
#description: update wcd db if available
#usage: update-cd
if [ -f "$(command -v "wcd")" ] && [ -f "$(command -v "wcd.exec")" ]; then
mkdir -p "${HOME}"/.wcd; wcd.exec -GN -j -xf "${HOME}"/.ban.wcd -S "${HOME}"
[ -f "${HOME}"/.treedata.wcd ] && mv "${HOME}"/.treedata.wcd "${HOME}"/.wcd/
fi
</pre>
<p>Being able to move to any directory from anywhere is really helpful, however sometimes it’s desirable to move around parent and nearby directories efficiently, and that’s where <a href="https://github.com/shyiko/commacd">commacd</a> comes in. With <code class="language-plaintext highlighter-rouge">commacd</code> several aliases (<code class="language-plaintext highlighter-rouge">,</code>, <code class="language-plaintext highlighter-rouge">,,</code> and <code class="language-plaintext highlighter-rouge">,,,</code>) are defined which can be used in the following scenarios:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ , /u/l/b #moving through multiple directories
=> cd /usr/local/bin
$ , d #moving through multiple directories with the same name
=> 1 Desktop
2 Downloads
: <type index of the directory to cd into>
~/code/projects/zion/src/module $ ,, #going up till a project directory is found (git/hg/svn based)
=> cd ~/code/projects/zion
~/code/projects/zion/src/module $ ,, pro #going into the first parent directory named pro*
=> cd ~/code/projects
~/code/projects/zion/src/module $ ,, zion matrix #substituting zion with matrix in the current path and going into the result
=> cd ~/code/projects/matrix/src/module
~/code/projects/zion/src/module $ ,,, matrix/tests #going into a sibling directory that shares the same parent directory
=> cd ~/code/projects/matrix/tests/
</code></pre></div></div>
<p>Like wcd, <code class="language-plaintext highlighter-rouge">commacd</code> is a script which can be downloaded from:</p>
<ul>
<li><a href="https://raw.githubusercontent.com/shyiko/commacd/master/commacd.bash">https://raw.githubusercontent.com/shyiko/commacd/master/commacd.bash</a></li>
</ul>
<p>Or to get my personal version:</p>
<ul>
<li><a href="https://raw.githubusercontent.com/javier-lopez/learn/master/sh/tools/commacd">https://raw.githubusercontent.com/javier-lopez/learn/master/sh/tools/commacd</a></li>
</ul>
<p>Upon getting any of them, the script should be sourced and used through aliases (due to the nature of the <code class="language-plaintext highlighter-rouge">cd</code> built-in), eg, <strong>~/.bashrc</strong>:</p>
<pre class="sh_sh">
if [ -f "$(command -v "commacd")" ]; then
. commacd
alias ,=_commacd_forward
alias ,,=_commacd_backward
alias ,,,=_commacd_backward_forward
fi
</pre>
<p>That’s it, now moving around should feel less archaic, happy cli browsing 😄</p>
lint2011-02-07T00:00:00+00:00http://javier.io/blog/en/2011/02/07/lint<h2 id="lint">lint</h2>
<h6 id="07-feb-2011">07 Feb 2011</h6>
<p>My belly button has always produced lint, it’s a white, soft, and warm guest; she lives there, and sometimes at night we talk.., and I realize we like the same things, we ask the same questions and sometimes we even reach the same conclusions. This makes me think about the possibility of having been a little lint in the belly button of someone else in a previous life.</p>
share connection between personal computers2010-12-14T00:00:00+00:00http://javier.io/blog/en/2010/12/14/share-connection-between-personal-computers<h2 id="share-connection-between-personal-computers">share connection between personal computers</h2>
<h6 id="14-dec-2010">14 Dec 2010</h6>
<h3 id="wireless-to-wired">Wireless to wired</h3>
<ul>
<li><strong>eth0:</strong> wired link to other machine</li>
<li><strong>eth1:</strong> wireless link to internet</li>
</ul>
<pre class="sh_sh">
$ sudo ifconfig eth0 10.0.0.1
$ sudo iptables -F
$ sudo iptables -X
$ sudo iptables -t nat -F
$ sudo iptables -t nat -X
$ sudo iptables -t mangle -F
$ sudo iptables -t mangle -X
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
$ sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
</pre>
<h3 id="wired-to-wireless">Wired to wireless</h3>
<ul>
<li><strong>eth0:</strong> wired link to internet</li>
<li><strong>eth1:</strong> wireless interface as access point in ad-hoc mode</li>
</ul>
<pre class="sh_sh">
$ sudo iwconfig wlan0 mode ad-hoc
$ sudo iwconfig wlan0 essid proxywlan
$ sudo ifconfig wlan0 10.0.0.1 up
$ sudo iptables -F
$ sudo iptables -X
$ sudo iptables -t nat -F
$ sudo iptables -t nat -X
$ sudo iptables -t mangle -F
$ sudo iptables -t mangle -X
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
</pre>
<p>After completing any of the previous steps (and if no dhcp daemon has been set up) the client machine will need to be configured manually, eg:</p>
<ul>
<li><strong>ip:</strong> 10.0.0.2</li>
<li><strong>gateway:</strong> 10.0.0.1</li>
<li><strong>dns:</strong> 8.8.8.8</li>
</ul>
<p>Otherwise you can run a <a href="https://raw.githubusercontent.com/javier-lopez/learn/master/python/tools/simple-dhcpd">simple dhcpd</a> daemon:</p>
<pre class="sh_sh">
$ sudo simple-dhcpd -i eth0 -a 10.0.0.1
</pre>
wire and wireless concurrent connections with wicd2010-12-07T00:00:00+00:00http://javier.io/blog/en/2010/12/07/wire-wireless-concurrent-connections-with-wicd<h2 id="wire-and-wireless-concurrent-connections-with-wicd">wire and wireless concurrent connections with wicd</h2>
<h6 id="07-dec-2010">07 Dec 2010</h6>
<p>Even when it’s not possible to configure two concurrent connections from within <a href="http://wicd.sourceforge.net">wicd</a> it can be tricked to do so. To do this the <strong>/etc/network/interfaces</strong> file must be edited with the wired interface details, eg:</p>
<pre class="sh_sh">
$ cat /etc/network/interfaces
auto eth0
iface eth0 inet static
address 10.0.0.1
netmask 255.255.255.0
network 10.0.0.0
broadcast 10.0.0.255
</pre>
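<p>The <strong>network</strong> and <strong>broadcast</strong> values above follow from the address and netmask with plain bitwise arithmetic; a quick sketch (<code>ip_calc</code> is a hypothetical helper, not part of wicd):</p>

```shell
#!/bin/sh
#derive network and broadcast addresses from ip + netmask
ip_calc() {
    oldifs="${IFS}"; IFS="."
    set -- $1 $2   #split both dotted quads into $1..$8
    IFS="${oldifs}"
    printf 'network: %s.%s.%s.%s broadcast: %s.%s.%s.%s\n' \
        "$(($1 & $5))" "$(($2 & $6))" "$(($3 & $7))" "$(($4 & $8))" \
        "$(($1 | (255 - $5)))" "$(($2 | (255 - $6)))" \
        "$(($3 | (255 - $7)))" "$(($4 | (255 - $8)))"
}

ip_calc 10.0.0.1 255.255.255.0
#network: 10.0.0.0 broadcast: 10.0.0.255
```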
<p>Afterwards, the wired interface will need to be removed from wicd <strong>properties</strong>. That’s it! Now the wireless interface can be controlled from wicd and the wired one through <strong>ifup</strong>/<strong>ifdown</strong> 😇</p>
<ul>
<li><a href="https://bugs.launchpad.net/wicd/+bug/228578">https://bugs.launchpad.net/wicd/+bug/228578</a></li>
</ul>
send emails from terminal2010-11-28T00:00:00+00:00http://javier.io/blog/en/2010/11/28/send-emails-from-terminal<h2 id="send-emails-from-terminal">send emails from terminal</h2>
<h6 id="28-nov-2010">28 Nov 2010</h6>
<p>In some systems the command <strong>mail</strong> is installed by default and as its name
suggests it’s used to send/read emails (usually between users of the same
system). For this to work a mail server needs to be installed locally; I’m not
sure about others, but that sounds like a lot of work to me. I just want a light
client from which I could send some emails. After searching on the Internet I came
to <a href="http://caspian.dotconf.net/menu/Software/SendEmail/">Sendemail</a>, a perl
script that connects to external smtp servers and uses them to deliver messages:</p>
<pre class="sh_sh">
$ sudo apt-get install -y sendemail libio-socket-ssl-perl libnet-ssleay-perl
</pre>
<pre class="sh_sh">
$ sendemail -f from@foo.org \
-u title -m message \
-t to@bar.com \
-s mail.foo.com:26 \
-xu user -xp password
</pre>
<p>Or with gmail:</p>
<pre class="sh_sh">
$ sendemail -f from@foo.org \
-u title -m message \
-t to@bar.com \
-s smtp.gmail.com:587 \
-o tls=yes -xu user -xp password
</pre>
<p>Now, since gmail blocks hosts per ip, it sometimes doesn’t work when it’s used
from new locations, which can be very annoying. Fortunately, there are other ways
to send emails from a system; my favorite method is to use
<a href="http://mailgun.com">http://mailgun.com</a>. When using mailgun you only need an
account in such service and <strong>curl</strong> installed in your system. I’ve created a
script that wraps the required logic and just sends emails.</p>
<pre class="sh_sh">
$ wget https://raw.github.com/javier-lopez/learn/master/sh/tools/mailgun
$ sh mailgun --api xxx "address@to.com" "message"
</pre>
<!--<iframe class="showterm" src="http://showterm.io/6d595bb4e5424b943e54f" width="640" height="300"> </iframe>-->
<p></p>
calavera2010-10-28T00:00:00+00:00http://javier.io/blog/es/2010/10/28/calavera<h2 id="calavera">calavera</h2>
<h6 id="28-oct-2010">28 Oct 2010</h6>
<p>Nothing much, a calavera of easy rhymes for the extinct #fel-clan crew</p>
<pre class="lyric">
Andaba Catrina muy apurada
recorría freenode sin cesar
buscaba un grupo, uno singular
headshot al #fel-clan venia a dar
Pedradas, monólogos en gringo
batallas de emoticons de a montón
waza, y soporte sin distinción
Después de muchos /joins tirar
al canal del team pudo entrar
Hrgn fue al único que encontró
lanzo una granada y el susodicho
sin ENLI y sin cabeza se quedó
Idle entonces espero
a linuxeros, macqueros y windoceros
no importaba pues traía
paz paz para cualquiera
Chanserv le dio @ sin preguntar
uno a uno el averno iban a visitar
Thot entró y esta vez voz no le faltó
gritos dicen que se escucharon
cuando la Catrina desfundó su arma
y los dientes le tumbó
Uno menos se dijo riendo
y enseguida se puso a pensar
le hubiera preguntado cómo
el openBSD actualizar
La noche se acercaba
y la calaca desesperaba
uno, dos, cuatro, entraron de jalón
y a todos se los despachó
Leoi, avento a Ceotz al frente
corrió, saltó y disparó
entonces la máquina se le trabó
y en el acto se lo llevó
Jackrock creyendo que la tenía
disparo con furor
pero quedando out of army
la muerte del hi-skill lo eliminó
Centinela, al ver a todos caer
las paces intento hacer
y cuando en sus manos la tenía
un disparo certero lanzó
mal hizo el chaval, pues la Catrina
un pase VIP al infierno le dió
Más tarde que temprano
Azimov apareció
y viendo todo el tiradero
la submachine gun sacó
En seguida la muerte se lanzó
tres cartuchos descargó
de pies ligeros, ni una bala lo tocó
y a la cuarta, un Chilicuil lo salvó
Chilicuil quedo tendido
ni tiempo le dio de decir OMG
el karma ya le traía ganas
y juntas, todas sus cuentas se las cobró
La muerte lo intento todo
y Azimov le dio batalla
usando hacks no sabía
puedes evadir hasta el lodo
Más en un descuido
Azimov se torció el tobillo
cayo y se enterró su cuchillo
El AA llego, Asarch y Akemi juntitos
a la muerte se le antojaron romeritos
Asarch buzo caperuzo saco el recurso
mendigo misilazo acabo con el intruso
Akemi que no se enteró
piedras, lanzo y lanzo
y al pobre Asarch dio muerte
la única superviviente
</pre>
<p></p>
ipod shuffle and rebuild_db2010-10-18T00:00:00+00:00http://javier.io/blog/en/2010/10/18/ipod-shuffle-rebuild-db<h2 id="ipod-shuffle-and-rebuild_db">ipod shuffle and rebuild_db</h2>
<h6 id="18-oct-2010">18 Oct 2010</h6>
<p>Some time ago I bought an ipod shuffle and as with all apple products it turned out to be somewhat difficult to use on Linux. Rhythmbox and gtkpod sometimes do weird stuff, so I kept testing projects till I found <a href="http://shuffle-db.sourceforge.net/">rebuild_db</a>, a minimalist program that just gets the job done ☃</p>
<p>After connecting and mounting the device (it only works with shuffle models) you can create any directory (I call it ‘music’) and copy all your tracks there. At the end, before unmounting, just execute the program and it will rearrange the files and create the expected structure. I’m now a happy ipod shuffle user 😎</p>
i3 and xinerama2010-07-23T00:00:00+00:00http://javier.io/blog/en/2010/07/23/i3-xinerama<h2 id="i3-and-xinerama">i3 and xinerama</h2>
<h6 id="23-jul-2010">23 Jul 2010</h6>
<p><strong>Update Feb/2014:</strong> I’ve created a <a href="https://github.com/minos-org/minos-tools/blob/master/tools/dmenu-xrandr">dmenu script</a> that lists and configures available monitors automatically.</p>
<p>One of the main reasons I decided to stick with <a href="http://i3-wm.org">i3-wm</a> over <a href="http://wmii.suckless.org/">wmii</a> was the improved <a href="http://en.wikipedia.org/wiki/Xinerama">xinerama</a> support, that is, the ability to use several screens at the same time. Can you believe there are some window managers that don’t do it? Me neither 😳</p>
<p>For example, if I were to clone the video output from my laptop screen to a video projector I could use:</p>
<pre class="sh_sh">
$ xrandr --output VGA1 --mode 1024x768 --same-as LVDS1
</pre>
<p>Or if I were to extend my virtual workspace, I could execute:</p>
<pre class="sh_sh">
$ xrandr --output VGA1 --mode 1024x768 --right-of LVDS1
</pre>
<p><strong>xrandr</strong> accepts plenty of options, refer to the man page for more details.</p>
<ul>
<li><a href="https://wiki.ubuntu.com/X/Config/Resolution">https://wiki.ubuntu.com/X/Config/Resolution</a></li>
</ul>
gtk and java2010-07-13T00:00:00+00:00http://javier.io/blog/en/2010/07/13/gtk-java<h2 id="gtk-and-java">gtk and java</h2>
<h6 id="13-jul-2010">13 Jul 2010</h6>
<p>One of the main reasons I don’t like java is how its applications look; whenever I open a java program I have the sensation I’ve gone 10-15 years back in time. Nevertheless some of those applications can be forced to use gtk when available:</p>
<pre class="sh_sh">
export _JAVA_OPTIONS="-Dawt.useSystemAAFontSettings=on -Dswing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel"
</pre>
<p>It can also be permanently set by copying it to <strong>~/.bashrc</strong> or your favorite shell configuration file.</p>
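<p>A safe way to persist it is an idempotent append, so the line isn’t duplicated every time the snippet runs; a sketch (<code>append_once</code> is a made-up helper):</p>

```shell
#append_once FILE LINE: add LINE to FILE only when the exact line is absent
append_once() {
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

append_once "${HOME}/.bashrc" \
    'export _JAVA_OPTIONS="-Dawt.useSystemAAFontSettings=on -Dswing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel"'
```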
<ul>
<li><a href="http://blogs.sun.com/netbeansphp/entry/how_to_change_look_and">http://blogs.sun.com/netbeansphp/entry/how_to_change_look_and</a></li>
</ul>
fu-search2010-07-11T00:00:00+00:00http://javier.io/blog/en/2010/07/11/fu-search<h2 id="fu-search">fu-search</h2>
<h6 id="11-jul-2010">11 Jul 2010</h6>
<p><a href="http://commandlinefu.com">commandlinefu</a> is a popular website where one-liners for unix environments are posted. It’s a nice place to learn by example, look for ideas or just to spend time. I liked it so much that I decided to build my own client:</p>
<!--**[![](/assets/img/37.png)](/assets/img/37.png)**-->
<!--**[![](/assets/img/38.png)](/assets/img/38.png)**-->
<p><strong><a href="/assets/img/39.png"><img src="/assets/img/39.png" alt="" /></a></strong>
<!--<iframe class="showterm" src="http://showterm.io/e46d37e655b72730db834" width="640" height="300"> </iframe>--></p>
<p>Feel free to grab it at: <a href="https://github.com/javier-lopez/learn/blob/master/sh/tools/fu-search">https://github.com/javier-lopez/learn/blob/master/sh/tools/fu-search</a></p>
i32010-06-16T00:00:00+00:00http://javier.io/blog/en/2010/06/16/i3<h2 id="i3">i3</h2>
<h6 id="16-jun-2010">16 Jun 2010</h6>
<p>I’ve been using <a href="http://i3wm.org">i3-wm</a> for a year; it’s fast, configurable and l33t, so I have no intention of changing it. Nevertheless I decided to grab a more recent version (&lt; 4), add some <a href="http://i3wm.org">patches</a> and freeze it. This version is intended to be used in all supported Ubuntu LTS versions. Feel free to install it as well.</p>
<pre class="sh_sh">
$ sudo add-apt-repository ppa:minos-archive/main
$ sudo apt-get update && sudo apt-get install i3-wm
</pre>
<p>For completion, this is the <a href="https://github.com/javier-lopez/dotfiles/blob/master/.i3/config.4">~/.i3/config</a> configuration file I use, and how it looks:</p>
<p><a href="/assets/img/minos-movie.png"><img src="/assets/img/minos-movie.png" alt="" /></a></p>
<p>Happy tiling 😊</p>
synergy2010-06-14T00:00:00+00:00http://javier.io/blog/en/2010/06/14/synergy<h2 id="synergy">synergy</h2>
<h6 id="14-jun-2010">14 Jun 2010</h6>
<p><a href="http://synergy-foss.org">Synergy</a> is a program that unifies keyboard/mouse/clipboard between several machines (even machines running different OSes); according to its website it supports Windows, OSX and Unix (requires x11 and xtest).</p>
<div id="youtube">
<object width="662" height="491"><param name="movie" value="http://www.youtube.com/v/4wkJx9Ozfu8?version=3&hl=en_US" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/4wkJx9Ozfu8?version=3&hl=en_US" type="application/x-shockwave-flash" width="662" height="491" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>In Ubuntu it can be installed from the official repositories (both, client and server):</p>
<pre class="sh_sh">
$ sudo apt-get install -y synergy
</pre>
<p>The server part binds to port <strong>24800</strong>, so it doesn’t require special permissions:</p>
<pre class="sh_sh">
$ synergys
</pre>
<p>The clients can start and connect to the server ip by executing:</p>
<pre class="sh_sh">
$ synergyc --daemon server-ip
</pre>
<p>By default, synergy doesn’t encrypt the link, so a man in the middle could inspect the packets and see what the clients are typing. To avoid this problem, a ssh tunnel can be deployed:</p>
<pre class="sh_sh">
$ ssh -f -N -L 24800:localhost:24800 server-ip
$ synergyc --daemon localhost
</pre>
<p>Have fun 😋</p>
configure WPA/WEP in openbsd2010-05-25T00:00:00+00:00http://javier.io/blog/en/2010/05/25/wpa-wep-obsd<h2 id="configure-wpawep-in-openbsd">configure WPA/WEP in openbsd</h2>
<h6 id="25-may-2010">25 May 2010</h6>
<h3 id="wpa---static-ip">WPA - static ip</h3>
<p>Since openbsd 4.4 (4.5 for ath0) it’s possible to connect to wpa networks; it doesn’t work with all drivers, but eventually it should be viable with most of them.</p>
<pre class="sh_sh">
$ ifconfig ath0 nwid ACCESS_POINT wpa wpapsk $(wpa-psk ACCESS_POINT PASSWORD)
$ ifconfig ath0 10.0.0.2 255.255.255.0 10.0.0.1
</pre>
<p>It can also be configured in <strong>/etc/hostname.ath0</strong> for connecting at boot time:</p>
<pre class="sh_sh">
$ cat /etc/hostname.ath0
inet 10.0.0.2 255.255.255.0 10.0.0.255 nwid ACCESS_POINT wpa wpapsk \
0xc7bd82ef64a789369e18d6df63230a3b099f72a74b999bdbe837773e6081cb54
</pre>
<p>The last parameter is taken from <strong>$ wpa-psk ACCESS_POINT PASSWORD</strong></p>
<h3 id="wpa---dinamic-ip">WPA - dynamic ip</h3>
<pre class="sh_sh">
$ ifconfig ath0 nwid ACCESS_POINT wpa wpapsk $(wpa-psk ACCESS_POINT PASSWORD)
$ dhclient ath0
</pre>
<p><strong>/etc/hostname.ath0</strong>:</p>
<pre class="sh_sh">
$ cat /etc/hostname.ath0
dhcp nwid ACCESS_POINT wpa wpapsk \
0xc7bd82ef64a789369e18d6df63230a3b099f72a74b999bdbe837773e6081cb54
</pre>
<h3 id="wep---static-ip">WEP - static ip</h3>
<pre class="sh_sh">
$ ifconfig ath0 nwid ACCESS_POINT nwkey 0xPASSWORD
$ ifconfig ath0 10.0.0.2 255.255.255.0 10.0.0.1
</pre>
<p><strong>/etc/hostname.ath0</strong>:</p>
<pre class="sh_sh">
$ cat /etc/hostname.ath0
inet 10.0.0.2 255.255.255.0 10.0.0.255 nwid ACCESS_POINT nwkey 0xPASSWORD
</pre>
<h3 id="wep---dinamic-ip">WEP - dynamic ip</h3>
<pre class="sh_sh">
$ ifconfig ath0 nwid ACCESS_POINT nwkey 0xPASSWORD
$ dhclient ath0
</pre>
<p><strong>/etc/hostname.ath0</strong>:</p>
<pre class="sh_sh">
$ cat /etc/hostname.ath0
dhcp nwid ACCESS_POINT nwkey 0xPASSWORD
</pre>
<p>The same commands can be used from the installer (by using ! as a prefix).</p>
install ubuntu from the windows boot loader2010-05-19T00:00:00+00:00http://javier.io/blog/en/2010/05/19/ubuntu-installation-from-windows-boot-loader<h2 id="install-ubuntu-from-the-windows-boot-loader">install ubuntu from the windows boot loader</h2>
<h6 id="19-may-2010">19 May 2010</h6>
<p>I got a new netbook some weeks ago, I tested it fully for a month to verify the hardware didn’t have any defects, and decided to move on with the recent Ubuntu 10.04 release.</p>
<p>The machine came pre-installed with Windows and I didn’t have any cd/usb available so I decided to use the system itself to install Ubuntu. The first step was to download <a href="http://grub4dos.sourceforge.net/">grub4dos</a>, uncompress it and copy <strong>grldr</strong> and menu.lst (grub loader) to <strong>C:</strong></p>
<p><strong><a href="/assets/img/26.png"><img src="/assets/img/26.png" alt="" /></a></strong>
<strong><a href="/assets/img/27.png"><img src="/assets/img/27.png" alt="" /></a></strong></p>
<p>Then I created a <strong>C:\boot\grub</strong> directory and saved initrd (installer) and linux (kernel) in <strong>C:\boot</strong></p>
<p>For the x86 architecture, the files can be downloaded from:</p>
<ul>
<li><a href="http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/current/images/netboot/ubuntu-installer/i386/initrd.gz">http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/current/images/netboot/ubuntu-installer/i386/initrd.gz</a></li>
<li><a href="http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/current/images/netboot/ubuntu-installer/i386/linux">http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/current/images/netboot/ubuntu-installer/i386/linux</a></li>
</ul>
<p>For amd64:</p>
<ul>
<li><a href="http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/initrd.gz">http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/initrd.gz</a></li>
<li><a href="http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/linux">http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/linux</a></li>
</ul>
<p><strong><a href="/assets/img/28.png"><img src="/assets/img/28.png" alt="" /></a></strong></p>
<p>Afterwards I copied <strong>C:\menu.lst</strong> to <strong>C:\boot\grub</strong> and edit it this way:</p>
<p><strong><a href="/assets/img/29.png"><img src="/assets/img/29.png" alt="" /></a></strong></p>
<p>Finally I added the entry to the Windows loader and rebooted the system:</p>
<p><strong><a href="/assets/img/30.png"><img src="/assets/img/30.png" alt="" /></a></strong>
<strong><a href="/assets/img/31.png"><img src="/assets/img/31.png" alt="" /></a></strong>
<strong><a href="/assets/img/32.png"><img src="/assets/img/32.png" alt="" /></a></strong>
<strong><a href="/assets/img/33.png"><img src="/assets/img/33.png" alt="" /></a></strong></p>
<p>At startup a new entry called <strong>Start GRUB</strong> will show up. That’s it, the installer will take care of the rest of the process.</p>
<p><strong>WARNING:</strong> This method requires an Internet connection through a wired interface; it may work with some wifi cards, but the installer won’t recognize most of them, so it’s better not to rely on it.</p>
<ul>
<li><a href="https://help.ubuntu.com/community/Installation/FromWindows">https://help.ubuntu.com/community/Installation/FromWindows</a></li>
</ul>
deb file structure2010-05-19T00:00:00+00:00http://javier.io/blog/en/2010/05/19/deb-file-structure<h2 id="deb-file-structure">deb file structure</h2>
<h6 id="19-may-2010">19 May 2010</h6>
<p>Deb packages are nothing but <a href="http://en.wikipedia.org/wiki/Ar_%28Unix%29">ar containers</a>; what sets them apart (besides the suffix) are the 3 blobs they always contain.</p>
<ul>
<li>debian-binary: package version, normally 2.0</li>
<li>control.tar.gz: compressed package files containing <a href="http://en.wikipedia.org/wiki/Cryptographic_hash_function">checksums</a>, <a href="http://www.debian.org/doc/FAQ/ch-pkg_basics.html">scripts</a>, metadata, etc.</li>
<li>data.tar.gz: compressed package files containing the program itself (commonly in binary format)</li>
</ul>
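<p>Since a deb is just an ar archive, the structure is easy to reproduce and verify; a throwaway sketch that builds an empty but structurally valid container and lists its members (the file names are made up, and GNU tar/binutils are assumed):</p>

```shell
#!/bin/sh
#build a minimal deb-shaped ar archive and list its 3 members
cd "$(mktemp -d)" || exit 1
printf '2.0\n' > debian-binary
tar czf control.tar.gz -T /dev/null  #empty tarball placeholder
tar czf data.tar.gz -T /dev/null
ar r demo.deb debian-binary control.tar.gz data.tar.gz 2>/dev/null
ar t demo.deb
#debian-binary
#control.tar.gz
#data.tar.gz
```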
<p>NOTE: Modifying .deb packages directly is not the right way to do it. The formal procedure is described at the Debian packaging guide:</p>
<ul>
<li><a href="http://wiki.debian.org/HowToPackageForDebian">http://wiki.debian.org/HowToPackageForDebian</a></li>
</ul>
<p>For this example I needed to extract some files from firefox-launchpad-plugin. I had already installed firefox from a third party source, and Ubuntu wanted to install its own version as a dependency, which wasn’t going to happen.</p>
<p>To uncompress deb packages a call to ar is enough:</p>
<pre class="sh_sh">
$ ar xv firefox-launchpad-plugin_0.4_all.deb
x - debian-binary
x - control.tar.gz
x - data.tar.gz
</pre>
<p>If all you want is to modify the package, you can extract the .tar.gz files, modify them and repackage them with:</p>
<pre class="sh_sh">
$ ar r firefox-launchpad-plugin_0.4_all.deb debian-binary control.tar.gz data.tar.gz
ar: creating firefox-launchpad-plugin_0.4_all.deb
</pre>
<p>In this example, however, I’ll only copy some files to the file system:</p>
<pre class="sh_sh">
$ tar zxvf data.tar.gz
./
./usr/
./usr/lib/
./usr/lib/firefox-addons/
./usr/lib/firefox-addons/searchplugins/
./usr/lib/firefox-addons/searchplugins/launchpad-bug-lookup.xml
./usr/lib/firefox-addons/searchplugins/launchpad-bugs.xml
./usr/lib/firefox-addons/searchplugins/launchpad-package-bugs.xml
./usr/lib/firefox-addons/searchplugins/launchpad-packages.xml
./usr/lib/firefox-addons/searchplugins/launchpad-people.xml
./usr/lib/firefox-addons/searchplugins/launchpad-specs.xml
./usr/lib/firefox-addons/searchplugins/launchpad-support.xml
$ find ~/.mozilla/ -type d -iname searchplugins
/home/javier/.mozilla/firefox/h5xyzl6e.default/searchplugins
$ mv ./usr/lib/firefox-addons/searchplugins/* ~/.mozilla/firefox/h5xyzl6e.default/searchplugins/
</pre>
<p>Done! I don’t need to mess with a dependency hell for a bunch of files 😏</p>
<p><strong><a href="/assets/img/34.png"><img src="/assets/img/34.png" alt="" /></a></strong></p>
k estas haciendo? (curl + cookies + post)2010-03-09T00:00:00+00:00http://javier.io/blog/es/2010/03/09/k-estas-haciendo-curl<h2 id="k-estas-haciendo-curl--cookies--post">k estas haciendo? (curl + cookies + post)</h2>
<h6 id="09-mar-2010">09 Mar 2010</h6>
<p>The <a href="http://mononeurona.org">MN</a> had a twitter-style chat section; it didn’t have a defined API, but it could be parsed and converted for display in the console.</p>
<p>The script shows how curl can be used with cookies to send data over the http protocol (post/get).</p>
<p><strong><a href="http://gist.github.com/3058885"><img src="/assets/img/25.png" alt="" /></a></strong></p>
my current desktop2009-10-15T00:00:00+00:00http://javier.io/blog/en/2009/10/15/current-desktop-setup<h2 id="my-current-desktop">my current desktop</h2>
<h6 id="15-oct-2009">15 Oct 2009</h6>
<p><strong><a href="/assets/img/5.png"><img src="/assets/img/5.png" alt="" /></a></strong>
<strong><a href="/assets/img/6.png"><img src="/assets/img/6.png" alt="" /></a></strong>
<strong><a href="/assets/img/7.png"><img src="/assets/img/7.png" alt="" /></a></strong></p>
<p><strong>Ubuntu 9.04 + E17 + Ecomorph</strong></p>
less is more, and even more with color2009-06-05T00:00:00+00:00http://javier.io/blog/en/2009/06/05/less-with-color<h2 id="less-is-more-and-even-more-with-color">less is more, and even more with color</h2>
<h6 id="05-jun-2009">05 Jun 2009</h6>
<p><strong><img src="/assets/img/1.png" alt="" /></strong></p>
<p>I’ve just discovered how to colorize <strong>less</strong> output. It may seem unimportant, but I really prefer to colorize my life when possible.</p>
<pre class="sh_sh">
$ ls -la --color |less -R
</pre>
<p><strong><img src="/assets/img/2.png" alt="" /></strong></p>
<p>The colors are defined by editing the <strong>~/.bashrc</strong> file</p>
<pre class="sh_sh">
# Less Colors for Man Pages
export LESS_TERMCAP_mb=$'\E[01;31m' # begin blinking
export LESS_TERMCAP_md=$'\E[01;38;5;74m' # begin bold
export LESS_TERMCAP_me=$'\E[0m' # end mode
export LESS_TERMCAP_se=$'\E[0m' # end standout-mode
export LESS_TERMCAP_so=$'\E[38;5;246m' # begin standout-mode - info box
export LESS_TERMCAP_ue=$'\E[0m' # end underline
export LESS_TERMCAP_us=$'\E[04;38;5;146m' # begin underline
</pre>
<p><strong><img src="/assets/img/3.png" alt="" /></strong></p>
<p>The same trick can be used for other commands that output colorized messages (except those that detect when stdout is going through a pipe, such as grep):</p>
<pre class="sh_sh">
$ tree -Ca /sys/ | less -R
</pre>
<p><strong><img src="/assets/img/4.png" alt="" /></strong></p>
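<p>For commands like grep that turn color off when piped, the escape codes can usually be forced through anyway; a minimal sketch (the sample input is just an illustration):</p>
<pre class="sh_sh">
# grep normally disables color when its stdout is a pipe; --color=always
# forces the escape codes through, and less -R renders them
$ printf 'red\ngreen\nblue\n' | grep --color=always 'green' | less -R
</pre>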
<p>More color codes can be consulted at: <a href="http://ascii-table.com/ansi-escape-sequences.php">http://ascii-table.com/ansi-escape-sequences.php</a>.</p>
geek songs2008-12-23T00:00:00+00:00http://javier.io/blog/en/2008/12/23/geek-songs<h2 id="geek-songs">geek songs</h2>
<h6 id="23-dec-2008">23 Dec 2008</h6>
<p>Here are some geeky songs I wasn’t aware of; it’s not as if I spend all day looking for them 😉</p>
<h4 id="code-monkey">Code monkey</h4>
<div id="youtube">
<object width="420" height="315"><param name="movie" value="http://www.youtube.com/v/5W_wd9Qf0IE?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/5W_wd9Qf0IE?hl=en_US&version=3" type="application/x-shockwave-flash" width="420" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>[+] <a href="http://www.litetext.com/4pwd">Lyric</a></p>
<h4 id="kill--9">Kill -9</h4>
<div id="youtube">
<object width="420" height="315"><param name="movie" value="http://www.youtube.com/v/Fow7iUaKrq4?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/Fow7iUaKrq4?hl=en_US&version=3" type="application/x-shockwave-flash" width="420" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>[+] <a href="http://www.litetext.com/zx2x">Lyrics</a></p>
<h4 id="white-and-nerdy">White and nerdy</h4>
<div id="youtube">
<object width="420" height="315"><param name="movie" value="http://www.youtube.com/v/Nh9mVsBKwYs?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/Nh9mVsBKwYs?hl=en_US&version=3" type="application/x-shockwave-flash" width="420" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>[+] <a href="http://www.litetext.com/1ah4">Lyrics</a></p>
<h4 id="mc-hawking--entropy">MC Hawking - Entropy</h4>
<div id="youtube">
<object width="420" height="315"><param name="movie" value="http://www.youtube.com/v/2knWCuzcdJo?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/2knWCuzcdJo?hl=en_US&version=3" type="application/x-shockwave-flash" width="420" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>[+] <a href="http://www.litetext.com/43qb">Lyrics</a></p>
<h4 id="mac-or-pc">Mac or PC</h4>
<div id="youtube">
<object width="420" height="315"><param name="movie" value="http://www.youtube.com/v/Jkrn6ecxthM?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/Jkrn6ecxthM?hl=en_US&version=3" type="application/x-shockwave-flash" width="420" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>[+] <a href="http://www.litetext.com/g87q">Lyrics</a></p>
<h4 id="the-geeks-get-the-girls">The geeks get the girls</h4>
<div id="youtube">
<object width="420" height="315"><param name="movie" value="http://www.youtube.com/v/pDcz43pt6r4?hl=en_US&version=3" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/pDcz43pt6r4?hl=en_US&version=3" type="application/x-shockwave-flash" width="420" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>[+] <a href="http://www.litetext.com/b4d5">Lyrics</a></p>
<h4 id="sunny-sunny-sunday">Sunny sunny sunday</h4>
<div id="youtube">
<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/B1b-oM72Pac?version=3&hl=en_US" /></param><param name="allowFullScreen" value="true" /></param><param name="allowscriptaccess" value="always" /></param><embed src="http://www.youtube.com/v/B1b-oM72Pac?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
</div>
<p>If you know more let me know, most of the tracks were grabbed from <a href="http://www.catonmat.net/blog/category/musical-geek-friday/">catonmat.net</a>.</p>
kexec reboot2008-11-04T00:00:00+00:00http://javier.io/blog/en/2008/11/04/kexec-reboot<h2 id="kexec-reboot">kexec reboot</h2>
<h6 id="04-nov-2008">04 Nov 2008</h6>
<p>Since the 2.6 Linux kernel came out there is a new way to reboot quite fast. <a href="http://en.wikipedia.org/wiki/Kexec">Kexec</a> is a system call that replaces the running kernel with a new one without going through the BIOS initialization process. This means you can now reboot faster, shaving 20, 30 or even 60 seconds off the boot process.</p>
<p>To use it, the “kexec-tools” package must be installed and the kernel option “CONFIG_KEXEC” enabled. After setting up the system, kexec can be used this way:</p>
<pre class="sh_sh">
$ kexec -l /boot/vmlinuz --command-line="`cat /proc/cmdline`" --initrd=/boot/initrd
$ kexec -e
</pre>
<p>The first line loads the new kernel into memory and returns control to the user; it’s then up to you to decide when to “reboot” the system (the second line does it). <a href="http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/1.0/html/Realtime_Tuning_Guide/sect-Realtime_Tuning_Guide-Realtime_Specific_Tuning-Using_kdump_and_kexec_with_the_RT_kernel.html">Some people</a> are already using this technique to load a new kernel when the running one panics.</p>
<p>In the current upstream implementation, kexec doesn’t automatically unmount plugged devices, so it must be done manually:</p>
<pre class="sh_sh">
$ kexec -l /boot/vmlinuz --command-line="`cat /proc/cmdline`" --initrd=/boot/initrd
$ sync
$ umount -a
$ kexec -e
</pre>
<p>Fortunately, the Debian/Ubuntu maintainers have already integrated this logic into the reboot/halt scripts, so it’s now possible to reboot the system without unmounting anything manually:</p>
<pre class="sh_sh">
$ kexec -l /boot/vmlinuz --command-line="`cat /proc/cmdline`" --initrd=/boot/initrd
$ shutdown -r now
</pre>
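<p>Before relying on any of this, it may be worth confirming that the running kernel was actually built with CONFIG_KEXEC; a minimal sketch (the config path is typical for Debian/Ubuntu, other distributions may expose it elsewhere, e.g. /proc/config.gz):</p>
<pre class="sh_sh">
# check whether the running kernel supports kexec before using it;
# grep -s keeps quiet if the config file doesn't exist
if grep -qs '^CONFIG_KEXEC=y' "/boot/config-$(uname -r)"; then
    echo "kexec supported by this kernel"
else
    echo "CONFIG_KEXEC not found; a normal reboot is required"
fi
</pre>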
<p>In <a href="http://lizards.opensuse.org/2008/10/13/automatic-reboot-with-kexec/">OpenSuse 11.1</a> kexec may even be available as an opt-in. The day when I never shut down my computer gets closer.</p>
<ul>
<li><a href="http://www.ibm.com/developerworks/linux/library/l-kexec.html">http://www.ibm.com/developerworks/linux/library/l-kexec.html</a></li>
<li><a href="http://www.linux.com/feature/150202">http://www.linux.com/feature/150202</a></li>
<li><a href="http://lwn.net/Articles/15468/">http://lwn.net/Articles/15468/</a></li>
<li><a href="http://code.google.com/p/atv-bootloader/wiki/Understandingkexec">http://code.google.com/p/atv-bootloader/wiki/Understandingkexec</a></li>
</ul>
gentoo useflags2008-07-25T00:00:00+00:00http://javier.io/blog/en/2008/07/25/useflags-gentoo<h2 id="gentoo-useflags">gentoo useflags</h2>
<h6 id="25-jul-2008">25 Jul 2008</h6>
<p>These are the USE flags I’m using for a Pentium M laptop.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#console USE="${USE} bash-completion gpm ncurses slang fbcon"
#graphical interface USE="${USE} dbus X gnome -kde cairo libnotify "
#cd/dvd USE="${USE} cdr -dvdr dvdread "
#hardware USE="${USE} -3dfx -3dnow acpi -apm -altivec bluetooth hal
ieee1394 ipod lirc lm_sensors mmx hddtemp -mpi -multilib -netboot nocd pcmcia
pda ppds -scanner sse sse2 usb wifi gphoto2 opengl"
#dev USE="${USE} cscope dbm doc emacs examples expat -fortran -gcj
gtk -ifc -jikes java java6 javascript -mule pcre perl php python -qt3 -qt4
readline ruby sdl spl subversion"
#net USE="${USE} -aim cups -freewnn ftp -icq idn imap ipv6 jabber libgda
mime mozilla -msn -oscar samba sockets socks5 ssl vhosts -yahoo evo mailwrapper
rss"
#sound USE="${USE} alsa -oss ao esd osc ladspa lame
libsamplerate pulseaudio aac -arts audiofile -cddb cdparanoia dts flac jack
lash mad matroska mikmod modplug mp3 musepack musicbrainz ogg openal
shorten sox speex vorbis "
#misc formats USE="${USE} bzip2 pdf xml zlib "
#images USE="${USE} imagemagick gif jbig jpeg jpeg2k lcms mng openexr png
raw svg -wmf xpm"
#video USE="${USE} a52 aalib dv dvb dvd encode exif
ffmpeg gstreamer libcaca mpeg mplayer quicktime theora v4l2 vcd -win32codecs xv
xvid"
#security USE="${USE} clamav cracklib crypt pam syslog "
#random USE="${USE} -accessibility -bindist cdinstall -debug fam nptl
offensive -old-linux posix session source spell threads truetype unicode videos
xprint xscreensaver nls"
#dangerous flags: alpha, amd64, arm, hppa, ia64, mips, ppc, ppc64
#ppc-macos, s390, sh, sparc, x86
#*******************************************
#*******************************************
</code></pre></div></div>
<p>After changing global USE flags, Gentoo needs to be recompiled:</p>
<pre class="sh_sh">
$ emerge --update --deep --with-bdeps=y --newuse world
</pre>
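<p>Since a world recompile can take hours, a --pretend run can preview what the new USE flags would rebuild before committing to it; a minimal sketch (the guard is just so the command degrades gracefully on non-Gentoo machines):</p>
<pre class="sh_sh">
# --pretend (-p) only prints the package list, nothing is compiled
command -v emerge >/dev/null 2>&1 &&
    emerge --pretend --update --deep --newuse world ||
    echo "Portage not installed on this machine"
</pre>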