Friday, October 20, 2017

Running arbitrary command when using auto-completion in Zsh

Have you ever wanted to run a predefined command to provide auto-completion in zsh?
For example, you may have a script that accepts only predefined values as input.
It's actually possible, and quite easy.

First, you will need a file that will be invoked to auto-complete the command.
Here's an example:
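A minimal completion file, saved as _mycommand, could look like the following sketch. It assumes the command whose output you want as completion options prints one option per line; HERE_COMES_SHELL_COMMAND is a placeholder:

```zsh
#compdef mycommand

# Run the command and split its output into one completion option per line.
local -a _options
_options=(${(f)"$(HERE_COMES_SHELL_COMMAND)"})
compadd -a _options
```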

You can replace HERE_COMES_SHELL_COMMAND with any command. For example, it can be "cat ~/myfile" to read the options from a file.
#compdef defines the list of commands this file will auto-complete. In the example above it's mycommand; change it to your actual command.
As already mentioned, it can also be a list: #compdef mycommand myscript myprogram will auto-complete any of mycommand, myscript and myprogram.

Now, you need to tell zsh about your file:
1. Place your file in some directory. For example: ~/.myautocomplete
2. In your ~/.zshrc add the following line: fpath=(~/.myautocomplete $fpath)
3. After this line, add the following lines:
autoload -U compinit
compinit

On macOS I used the following instead:
autoload -U compaudit compinit
compinit

4. You may need to delete files that start with .zcompdump in your home directory.
5. Start a new shell, and it should work.

If you are using oh-my-zsh, it's a bit easier:
1. Create your directory under ~/.oh-my-zsh/plugins
For example ~/.oh-my-zsh/plugins/myautocomplete
2. Place this file in this directory.
3. Edit ~/.zshrc, find "plugins" and add "myautocomplete" to the list.
4. Start a new shell.

Tuesday, October 10, 2017

How do I avoid the error "Unable to validate the following destination configurations" when using S3 event notifications in CloudFormation?

There is a very important post about avoiding the "Unable to validate the following destination configurations" in AWS.
Too bad it's not mentioned just next to both S3 and SNS/SQS reference documentation.

BUT! This post is missing an important part: you will get this error even if you didn't specify the TopicPolicy (or QueuePolicy) at all!
Furthermore, you will get this error even if you did specify the policy, but it's not correct.
For example, if your policy is too restrictive and S3 is not able to send events to SNS, you will also get this error! Is that clear from the error's description? Not really. Is it clear from the AWS post above? No, not at all.

So just remember: when you see "Unable to validate the following destination configurations", check the policy. It may be missing. It may be too permissive or too restrictive. Either way, the problem is with the policy, not with the bucket.
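For reference, a minimal topic policy that allows S3 to publish could look like the following CloudFormation snippet. This is a sketch only; NotificationTopic and NotificationBucketName are hypothetical names you would adjust to your stack (referencing the bucket by name rather than !Ref also helps avoid a circular dependency between the bucket and the topic):

```yaml
NotificationTopicPolicy:
  Type: AWS::SNS::TopicPolicy
  Properties:
    Topics:
      - !Ref NotificationTopic
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Action: sns:Publish
          Resource: !Ref NotificationTopic
          Condition:
            ArnLike:
              aws:SourceArn: !Sub "arn:aws:s3:::${NotificationBucketName}"
```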

Wednesday, October 4, 2017

CloudFormation Tips

Some tips for using CloudFormation:

1. Don't specify a resource name unless you absolutely must. This way you can avoid name clashes, since CloudFormation will automatically assign unique names to your resources.
2. If you do need to specify a name, include the stack name in it. This way you reduce the potential for name clashes. You can also include the partition and region for resources whose names are global (e.g. S3 bucket names). Note that this will NOT prevent naming clashes completely, since somebody else can still use the same name.
3. When creating any IAM resources in your stack, make sure to add DependsOn to the resources that use these IAM resources. Apparently CloudFormation is not smart enough to resolve this dependency tree and handle it without additional configuration.
4. Sometimes the names CloudFormation gives to your resources are completely unrelated to the stack name. Include the ARNs of such resources in the Outputs, so you can easily find them later when needed.
5. A very common scenario in AWS is an S3 bucket that fires events to SNS or SQS when a file is uploaded. Apparently it's impossible to create this in a single change. See this post.
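To illustrate tip 3, here is a sketch of an explicit DependsOn on an IAM policy (MyRole and MyRolePolicy are hypothetical resources defined elsewhere in the same template):

```yaml
MyFunction:
  Type: AWS::Lambda::Function
  # Wait until the role's policy is created, not just the role itself:
  DependsOn: MyRolePolicy
  Properties:
    Role: !GetAtt MyRole.Arn
    Handler: index.handler
    Runtime: python3.9
    Code:
      ZipFile: "def handler(event, context): return"
```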

Sunday, April 10, 2016

Print Gradle Dependencies

One way to print the project's Gradle dependencies is gradle dependencyReport.
However, it creates a very large file with many scopes that are sometimes hard to track.
Sometimes it is useful to print just the list of dependencies of a specific scope.
A very small script can do the job; here is an example:
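A sketch along these lines prints one resolved jar per line for a single configuration. It assumes a Groovy-DSL build.gradle with the java plugin applied, and an older Gradle where the configuration is named runtime (newer versions use runtimeClasspath):

```groovy
// build.gradle
task printDeps {
    doLast {
        // Iterating a configuration yields the resolved artifact files.
        configurations.runtime.each { File jar ->
            println jar.name
        }
    }
}
```

Run it with "gradle printDeps". Alternatively, "gradle dependencies --configuration runtime" prints the dependency tree of a single scope without any script at all.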

Monday, November 23, 2015

Dropwizard: Add thread name to log

This should have been trivial, but somehow it isn't.
So I'll put it here.
Adding the thread name to Dropwizard logs:
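In config.yml, set a logFormat that includes the %thread conversion word (a sketch; the format string is a standard logback pattern, and the rest of the appender settings are up to you):

```yaml
logging:
  level: INFO
  appenders:
    - type: console
      logFormat: "%-5level [%d{ISO8601}] [%thread] %logger{36}: %message%n"
```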

Unix Shell: Use of functions to create complicated aliases

In *nix shells it is sometimes useful to create aliases that receive parameters.
This can be done using functions:
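For example, two functions that ssh into known machines (the user and host names here are hypothetical; adapt them to your environment):

```shell
# Functions behave like aliases but can take parameters ("$@").
kssh() {
    ssh "admin@kserver.example.com" "$@"
}

pssh() {
    ssh "admin@pserver.example.com" "$@"
}
```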

Now you can just type something like kssh or pssh.

Sunday, January 18, 2015

Pretty Format of JSON in vim

Open ~/.vimrc
Add the following line:
command Json :%!python -m json.tool

To format the JSON, type :Json
Note: you need Python installed.

Inspired by this post.

Wednesday, May 14, 2014

Monitor Java on Unix/Linux/Solaris

Just a short memo of useful commands that can help to troubleshoot Java on *nix:
(Btw, most of them will also work on Windows, but who runs Java on Windows? Just kidding...)

jps -m - show running Java processes and their pids.
jstack <pid> - print a thread dump.
jmap -dump:format=b,file=<path to file> <pid> - save a heap dump to a file.

Thursday, August 1, 2013

UnresolvedAddressException Tip

Getting java.nio.channels.UnresolvedAddressException?
No idea why this happens?

Check the code that creates the address. Did you use InetSocketAddress.createUnresolved(host, port) to create it?
Do NOT! Just use new InetSocketAddress(host, port), and it should be fixed.
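A minimal illustration of the difference (using localhost so it resolves without network access):

```java
import java.net.InetSocketAddress;

public class AddressDemo {
    public static void main(String[] args) {
        // createUnresolved() skips the DNS lookup entirely; NIO channels
        // throw UnresolvedAddressException when given such an address.
        InetSocketAddress bad = InetSocketAddress.createUnresolved("localhost", 80);
        System.out.println(bad.isUnresolved());  // true

        // The constructor resolves the host name eagerly, so the address
        // is safe to pass to NIO channels.
        InetSocketAddress good = new InetSocketAddress("localhost", 80);
        System.out.println(good.isUnresolved()); // false
    }
}
```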

P.S. This is the kind of post I write here after spending hours on a stupid bug,
so that people can google it and spend less time on it.

Tuesday, May 21, 2013

DevOps: Making Fast Deployments of Java Servers using Maven and Nexus

A warning: this post is theoretical. I have never tried anything like this yet. Maybe I will try it in the future, but currently it's just a nice idea.
In addition, if you know about somebody who works in a similar way, I would really like to know. So please comment!

If you provide a SAAS service, you probably have multiple Java servers running in some sort of a cluster. If your SAAS solution is complicated and multi-tier, you probably have multiple server types. And now a question arises: how do you make quick deployments to production?

The common solution suggests that you build a package and release it. It might be a war, or a zip, or an rpm if you are running on Linux.
Once released, you upload the package to the server, unzip/copy it to the relevant folder and restart the server.

The problem with this solution appears if your packages are large. (And if you are using OSGi, your packages are usually very large!) The upload itself takes time. It also uses traffic, which may become expensive if you perform a lot of deployments. And the really funny thing is that most of the upload is redundant: most of the jars in your package are third parties that do not change between deployments at all!

The common solution suggests pre-uploading the third-party jars to the server and excluding them from the package. I've seen such solutions, and in my opinion they are the exact opposite of a good solution: this way you split the package, the third parties become managed in two (sometimes more) places, and each deployment involves at least one additional (probably manual!) step of checking whether the third parties changed and whether an additional deployment of third parties is required.

But suppose you use Maven, and you upload your released packages to Nexus (or actually any other Maven repository). This Nexus repository contains all the third parties, all the released packages and, most importantly, the pom file that was used to build your project!
If you download this pom file, you will be able to build the package on the production server! Note that you don't need to do a full build that includes compilation, testing and so on. You just need your package, so assuming that you deploy a war, you only need to run "mvn war:war". (Once again: I have never tried it myself and the actual execution might be more complex, but I think the idea is clear.)
Sometimes, if you are running a Java application with a main class (plain old Java, not some kind of JEE inside an application server or a servlet container), you don't even need a package; you just need a correct classpath, and Maven will be happy to assist you: mvn dependency:build-classpath.
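On the production host, the classpath variant might look roughly like this. Like the rest of this post, it's an untested sketch; com.example:myapp:1.2.3 and com.example.Main are hypothetical coordinates, and the local repository path follows Maven's default layout:

```shell
# 1. Fetch only the released pom from the Nexus repository into the
#    local Maven repository.
mvn dependency:get -Dartifact=com.example:myapp:1.2.3:pom

# 2. Build the runtime classpath from that pom; Maven downloads only
#    the jars that are not already cached locally.
mvn -f ~/.m2/repository/com/example/myapp/1.2.3/myapp-1.2.3.pom \
    dependency:build-classpath -Dmdep.outputFile=/tmp/cp.txt

# 3. Run the application with the generated classpath.
java -cp "$(cat /tmp/cp.txt)" com.example.Main
```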

So I guess the idea is clear now. Each time, Maven will download only the relevant jars and save them to the local repository. The dependencies are managed in the same pom file that is used to release the application, so when making a package or creating a classpath on a production machine, the exact same dependencies will be used.
And the deployment process will become much faster!

I know that this idea is somewhat different from the usual process. Instead of doing something like "build, deploy, run", we do something that might look even more complicated: "build, deploy descriptor only, package, run". But this should be much faster, so I definitely think this idea is worth trying.

P.S. The idea described in this post relates only to the package itself: building, packaging and running. The deployment may contain additional steps, like changing local configuration files and so on. These steps are not covered here, as they are usually not part of a build process but part of release notes. A possible solution could be deploying the relevant scripts to the Nexus repository and somehow describing them in a pom file; when downloading the pom, the relevant scripts would also be downloaded and executed.

P.P.S. The idea also doesn't cover the tool that drives the whole process. Although it assumes the tool uses Maven, it says nothing about the actual implementation. It might be a Java process, or a shell script, or even Ant.

P.P.P.S. Notice that downloading files from Nexus using Maven performs important checks for you. For example, it makes an integrity check, which is very important in case of a bad network between the Nexus with the releases and a production site.
In addition, you can make some optimizations on Nexus. For example, if you have several production sites all over the world, each site may have its own Nexus pointing to the main release repository and caching it. This will make the deployments even faster.
