Making a symbolic link named ".m2" to the Gradle JAR repo

After many years of using Maven, I am used to having my JARs in the ~/.m2/ directory, so I created a link as follows:


$ ln -s ~/.gradle/caches/modules-2/files-2.1 ~/.m2 

$ ls -alt ~/.m2/
total 0
drwxr-xr-x  65 ukilucas  staff  2210 Apr 28 23:18 .
drwxr-xr-x   3 ukilucas  staff   102 Apr 28 23:18 junit
drwxr-xr-x   4 ukilucas  staff   136 Apr 28 23:18 org.hamcrest
drwxr-xr-x   3 ukilucas  staff   102 Apr 28 14:15 com.android.tools.external.lombok
drwxr-xr-x   3 ukilucas  staff   102 Apr 28 14:15 org.abego.treelayout
drwxr-xr-x   3 ukilucas  staff   102 Apr 28 14:15 com.intellij
...
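One caveat worth noting: if ~/.m2 already exists as a directory, ln -s will quietly create the link inside it rather than replace it. A guarded sketch of the same idea, run against a temporary directory so it is safe to try anywhere:

```shell
# Demonstration in a temporary directory; substitute $HOME and the real
# Gradle cache path when doing this for real.
home=$(mktemp -d)                                    # stand-in for $HOME
mkdir -p "$home/.gradle/caches/modules-2/files-2.1"  # fake Gradle cache
# Guard: an existing ~/.m2 directory would make ln -s nest the link inside it
if [ ! -e "$home/.m2" ]; then
    ln -s "$home/.gradle/caches/modules-2/files-2.1" "$home/.m2"
fi
readlink "$home/.m2"   # prints the cache path the link points at
```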


As an Amazon Associate I earn from qualifying purchases.

Anti-obesogenic effect of apple cider vinegar in rats subjected to a high-fat diet

Resources






Running Groovy with multiple command line parameters


Source for ArgumentsTest.groovy. The first two lines make the script self-executing: bash runs the file, the `//usr/bin/env groovy` line (which Groovy later treats as a line comment) re-runs it with Groovy, and the shell then exits with Groovy's status.


#!/bin/bash
//usr/bin/env groovy -cp extra.jar:spring.jar:etc.jar -d -Dlog4j.configuration=file:/etc/myapp/log4j.xml "$0" "$@"; exit $?


def params = ""
args.each {
    if (it) {
        params += it + "! "
    }
}

println "Hello World " + params

Execute permission

$ sudo chmod +x ArgumentsTest.groovy
Password:

Output


$ ./ArgumentsTest.groovy Uki Natalia Zoe
Hello World Uki! Natalia! Zoe!
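For comparison, the same "append a bang after every non-empty argument" loop can be sketched in plain shell (the greet function is hypothetical, not part of the original script):

```shell
# greet mirrors the Groovy script: append "! " after each non-empty argument
greet() {
    params=""
    for arg in "$@"; do
        [ -n "$arg" ] && params="${params}${arg}! "
    done
    echo "Hello World $params"
}

greet Uki Natalia Zoe   # prints: Hello World Uki! Natalia! Zoe!
```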



Mac Groovy installation with brew

$ brew install groovy
==> Downloading https://dl.bintray.com/groovy/maven/apache-groovy-binary-2.4.6.zip
######################################################################## 100.0%
==> Caveats
You should set GROOVY_HOME:
  export GROOVY_HOME=/usr/local/opt/groovy/libexec
==> Summary
🍺  /usr/local/Cellar/groovy/2.4.6: 63 files, 27.6M, built in 6 seconds
$ groovy -v
Groovy Version: 2.4.6 JVM: 1.8.0_77 Vendor: Oracle Corporation OS: Mac OS X



If you like this post, please donate 2 cents ($0.02, literally) as a token of appreciation and to encourage me to write more:

Donate Bitcoins




OnePlus One Android 6

Finally!




Gradle DSL method not found: 'runProguard'

This error appears when running an Android Gradle build:

Gradle DSL method not found: 'runProguard'

To fix it, open app/build.gradle and change this part as follows:

    buildTypes {
        release {
            // runProguard false // no longer supported
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }




Android Studio: Android SDK and Java

Updating Android SDK in Android Studio

Select all API levels you want to develop against.








9a. Gradle upgrade

In this tutorial we will cover the basics of the Gradle build system.

Step 1: verify which Gradle version you have

$ gradle -version

------------------------------------------------------------
Gradle 2.13
------------------------------------------------------------

Build time:   2016-04-25 04:10:10 UTC
Build number: none
Revision:     3b427b1481e46232107303c90be7b05079b05b1c

Groovy:       2.4.4
Ant:          Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM:          1.8.0_77 (Oracle Corporation 25.77-b03)
OS:           Mac OS X 10.11.4 x86_64



Step 2: update to the newest version

If Gradle is missing entirely (on Mac, using brew):

First install brew on the Mac:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

and then install Gradle:

brew install gradle

If you are updating an existing installation (on Mac, using brew):

brew update && brew upgrade gradle
...
==> Downloading https://downloads.gradle.org/distributions/gradle-2.1-bin.zip

######################################################################## 100.0%



$ gradle -version
------------------------------------------------------------
Gradle 2.1
------------------------------------------------------------
Build time:   2014-09-08 10:40:39 UTC
Build number: none
Revision:     e6cf70745ac11fa943e19294d19a2c527a669a53
Groovy:       2.3.6
Ant:          Apache Ant(TM) version 1.9.3 compiled on December 23 2013
JVM:          1.8.0_20-ea (Oracle Corporation 25.20-b05)
OS:           Mac OS X 10.9.4 x86_64

$ which gradle
/usr/local/bin/gradle







Blockchain


Let’s say that you get excited about having your own 256-core cluster computer built using 64 Pi SBCs; unless you are a big lab processing many genomic orders, you might find the system underutilized at times. It would be good to put the hardware to use to make money. Blockchain may be an interesting option.

Bitcoin mining is not an option anymore

A few years back, around 2009-2012, it was a great time to use extra computer processing power to mine Bitcoins. Bitcoin is the first and so far the most popular electronic currency. To mine Bitcoins means to have your computers run transaction verifications for the Bitcoin network, thereby supporting its infrastructure. The operations are not very complex and do not require much processing power, but they have to be very fast to compete with other Bitcoin miners. The verifications use a hashing algorithm called SHA-256, which is the most important part of a miner's operation. The speed of a miner is measured by how many hashes it can perform per second (hash/s).
The times of general-purpose-computer Bitcoin mining are long gone. An 800 MHz Raspberry Pi was able to do about 100 kilo-hashes/s (x1,000 hashes), which is plenty to support the needs of the currency; however, under Bitcoin's rules only the fastest miners are awarded the monetary reward, and the rest of this massive computational power is basically wasted and not rewarded.
When general-purpose CPU mining was no longer competitive, there was a period of GPU mining, which reached a range of a few mega-hashes/s (x1,000,000 hashes). The SHA-256 algorithm runs very well on massively parallel GPUs; however, this era ended quickly as well with the development of SHA-256 ASICs (application-specific integrated circuits), which moved speeds into tera-hash (x1,000,000,000,000 hashes) territory.
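To get a feel for the primitive involved, a single SHA-256 hash can be computed with standard command-line tools (this is just the hash function itself, not the actual mining loop, which repeatedly double-hashes a block header; assuming the GNU coreutils sha256sum is available):

```shell
# SHA-256 of the classic test vector "abc"; printf avoids appending a newline
printf 'abc' | sha256sum
# -> ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  -
```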
As more powerful miners come online, the difficulty of the operations is artificially increased to make the competition even harder; today the unit price for a miner like the Neptune, a 3 tera-hash/s ASIC, is about $10,000.
I find this trend unfortunate, as it leads to an artificial arms race where only the richest can afford the mining hardware. In my opinion it is also Bitcoin's biggest weakness.
Today, the Bitcoin network is considered the biggest supercomputer in the world. Its total power is measured in thousands of peta-hashes (about 1.2 exa-hashes/s, or x1,200,000,000,000,000,000 hashes, as of April 2016), and as I have already mentioned, it is a major waste of resources, money, and electricity, as these computers are power-hungry heat emitters.
The only reason to increase hashing difficulty is to slow miners down and keep the network finding blocks every 10 minutes on average. Ten minutes is forever as far as purchase verification is concerned. Considering that one 10-minute block consists of 500 to 2,500 transactions, which is very little in the context of the global economy, the system really needs a revision. Do not get me wrong, I still buy Bitcoin, but I mostly invest in alternative BlockChain solutions that I find technologically superior.

Blockchain proof-of-work equilibrium

When considering BlockChain as a tool we might want to use, we should choose the optimal configuration of that BlockChain.
All miners should be rewarded for their actual work, which is supporting the needs of BlockChain transactions; this should encourage every participant (coin owner) to run the full-node mining software on their PC. As a result, the overall network would be more distributed, and therefore healthier and more budget-minded.
Currently, the 10-minute Bitcoin block winner takes it all, or about $11,000 at current coin prices.
The perfect-equilibrium "miner fee" would be less glamorous, but very consistent. Distributed among the roughly 6,500 nodes running today, it would come to about $245 per node a day; the exact number would depend on the size of the full-node network.
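That per-node figure follows from the block schedule: one block every 10 minutes is 144 blocks a day, so using the ~$11,000-per-block and ~6,500-node numbers above:

```shell
# 144 blocks/day (one every 10 minutes) at ~$11,000 each,
# split evenly across ~6,500 full nodes (integer math truncates)
echo $(( 144 * 11000 / 6500 ))   # -> 243
```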
The total size of the network could be controlled by proof-of-ownership, where owners could run one full node per x coins they own; this makes sense, as owners would have a stake in maintaining the rules and the status quo. The required ownership would not have to be large, probably in the range of $1,000 per full node. The idea would be to make full-node operators borderline profitable, without the excess that would drive an arms race.
Believe it or not, an SBC like a 1.4 GHz 4-core Pi is powerful enough to run a full node at 2 mega-hashes per second. As of this writing, 64 GB of storage is not enough for the full Bitcoin ledger, so alternative solutions like USB storage would have to be considered.
A minimum block size would have to be established so that transactions are verified as fast as possible; the target rate should be about 10,000 transactions per second to meet future global transaction demand.
Of course, faster computers would be able to build blocks faster, but since the reward would not be tied to speed, power efficiency would actually be more profitable.
If no node was able to build a block within x seconds, a bonus award would go to the first to finish. This would ensure that owners maintain machines that meet current demand while avoiding energy waste.

BlockChain as proof-of-work for genomic calculations

The BlockChain in our case could be used to maintain the proof-of-work of the genomic analysis. In the simplest terms, this means that your idle hardware could perform calculations for other scientists by running predefined (and therefore safe) software on their data. The cost of these operations would be set by total network demand. The coin tokens paying for the service could be exchanged for fiat currencies such as USD when project funding is available, but they would also circulate as researchers use each other's hardware as needed.

There should also be a small percentage of "developer fees" paid per transaction to cover the cost of the state-of-the-art software that runs the system.
- - - -

I will be publishing bits of information about my research (genomics, cluster hardware, blockchain, software) on this blog, but please support my efforts by getting the book.

The book is available on Amazon.com, Barnes & Noble Nook, and Google Books, but you can get it directly, and 80% cheaper, here:


If you would like to receive a copy of this book in ePub format, please donate $4 (or more) via bitcoin and I will email a copy to you:
Donate Bitcoins




Genomics - easy as Pi

I have published my new book about building a very affordable, 256-core Single Board Computer cluster for use as a distributed computing platform for big-data bioinformatics.

I will be publishing bits of information about my research (genomics, cluster hardware, software) on this blog, but please support my efforts by getting the book.





My new eBook: "Genomics - easy as Pi" - DIY parallel cluster computers in big data genetic research






This book has been inspired by the recent convergence of two sciences, both of which are my life-long passions, and both of which are for the first time this year becoming affordable to an average person: genomics and cluster computers.
The field of genomics has exploded in the last few years beyond belief: the original human genome sequencing project, finished in 2000, took 13 years and $3 billion to complete. Today, the cost of sequencing a whole genome is approaching $800 (in bulk), and it can be done in a couple of hours.
Genome research has been concentrated around prestigious institutions with generous grants that could afford access to the newest sequencing technology. The positive outcome of research sponsored by public funds is that the results are also public, and anyone can access genetic sequence information from Web-based databases and FTP sites. With a quick search you can get sequences of many organisms, ranging from common bacteria, yeast, corn, wheat, fruit flies, mice, rats, extinct mammals, monkeys, apes, and Neanderthals to many humans. Sequencing the next genome takes hours, and thousands of them are being sequenced now, as you read this.
For a couple hundred dollars you can test for the presence of some interesting sequences using companies like 23andMe; best of all, you can download the raw data of your test and start comparing it against other genomes or gene databases immediately.
At the same time, the medical field is learning about hundreds of thousands of proteins and trying to figure out which genetic sequences code for them. Doctors are discovering the genetic associations of many diseases and individual drug interactions.
Each human genome is composed of 3.3 billion letters (base pairs), and comparing it against multiple other genomes requires some serious processing power. There are other organisms, such as the loblolly pine (Pinus taeda), that have 23 billion base pairs in their DNA, which is 7 times more than a human! Due to the sheer amount of data generated every day, there is a vast opportunity for new software tools and new applications of that knowledge.

The field of genomics is growing faster than any other technological advance in human history, and few would argue against its having potentially the biggest impact on our lives since we learned how to use fire.



You will be able to find it on Amazon, Barnes & Noble Nook, and Google Books soon:

http://www.amazon.com/kindle/dp/B01E1VQ3EK/ref=rdr_kindle_ext_eos_detail

