My favorite tiny scripts for development, testing and productivity.
Script to make it easier to start developing your new Perl module under the pressure of unit tests written in Test::More and the like.
The suggested use is to have it sit on the other monitor, running all .t files around and around and being annoying about the fact that they fail. Then the rule of thumb goes without saying: "no green, no commit".
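A rough hand-rolled equivalent of the idea, assuming prove from Test::Harness and all .t files in the current directory (the real script may behave differently):
$ while true; do clear; prove *.t; sleep 5; done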
Binary dump. Reads STDIN, 4 bytes at a time, and displays it in a similar way as hexdump(1) does with the -C option (canonical hex+ASCII display).
Example:
$ echo "Hello world" | ./bd
00000000 01001000 01100101 01101100 01101100 |Hell|
00000004 01101111 00100000 01110111 01101111 |o wo|
00000008 01110010 01101100 01100100 00001010 |rld.|
$
czkrates [-d] [-v] [-D DATE] [AMOUNT] [CURRENCY]
czkrates [-d] [-v] [-D DATE] [CURRENCY] [AMOUNT]
Show the CZK rate for currency CURRENCY. If AMOUNT is given, multiply by that number. Use -D to get the rate for a past day; the DATE format is the same as for the date utility from coreutils. -d and -v turn on debugging and verbosity, respectively.
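For example, how much 100 EUR is in CZK, and yesterday's rate (DATE can be anything date(1) accepts):
$ czkrates 100 EUR
$ czkrates -D yesterday EUR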
Split URLs into components and arguments. The output makes it easier to see what is or is not in the URL, and/or to compare URLs using standard tools like diff.
$ dissect_url "proto://srv:port/a/query?par1=foo&par2=bar#joe&mary"
proto://srv:port
/a/query
?
par1=foo&
par2=bar
#
joe&
mary
Note that by removing all whitespace from the dissected URL you should get the original URL.
To enter multiple URLs, simply omit the argument; the script will go into filter mode, where you can enter URLs one per line. Quit this mode by entering EOF (Ctrl+D) or an empty line. In this mode, the output will be separated like this:
$ dissect_url < two_urls
=== url 01 =============================================
url1
=== url 02 =============================================
url2
mkx [-f|--force] [+TEMPLATE] FILE
mkx -l|--list
Make an executable script, i.e. create a new file, add a shebang line and a template, and mark it executable (0755). If TEMPLATE is not given, the language is guessed from the FILE suffix. If the file already exists, it gives up, unless the -f option is passed.
Use -l to list the supported TEMPLATEs.
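For example (the +bash template name and the suffix-to-language mapping shown here are assumptions; see mkx -l for the real list):
$ mkx +bash backup.sh     # explicit template
$ mkx fetch.py            # template guessed from the .py suffix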
hterrs URL
Show errors seen when connecting to URL.
Content is ignored, and instead just the errors are printed to stdout so that they can be used in notifications, etc. Errors collected include socket errors, TLS/SSL errors and HTTP errors (e.g. 404).
Exit status is zero if there WERE errors, one if there were none, two if the script was used incorrectly, and three or more in case of other failures.
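Since the exit status is zero when errors WERE seen, it chains naturally with && for notifications (the URL and the notify-send call here are only an illustration):
$ hterrs https://www.example.com/ && notify-send "problems reaching example.com"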
overduer [-d] [--] [FILTER]
overduer --help
Wrapper around TaskWarrior that helps you quickly re-schedule tasks that are past or quickly approaching their due date.
The main idea behind overduer is that if "pushing" tasks becomes extremely easy and fast, you can develop a habit of constantly re-visiting your tasks without it being a nuisance. This will help you avoid the most common risks associated with task planning:
completely forgetting about tasks (if you set no due date, or one too far in the future),
being frustrated by the ever-growing pile,
or learning to just ignore TaskWarrior altogether.
overduer is heavily inspired by Git's interactive rebase: it uses the vipe utility to open a simple line-based list that you can edit and save; lines that you did not edit, or lines that you deleted, are ignored.
This gives a perfect balance between speed and safety: first, it only takes a few seconds to deal with the tasks you are immediately sure about and just ignore the rest. It also provides a way to bail out of a mistake: just delete the whole buffer, exit the editor (saving the changes!) and start over.
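For example, to only deal with one project's tasks (assuming FILTER is passed through to TaskWarrior as an ordinary filter expression):
$ overduer project:home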
Wrapper around the file utility. Reads STDIN, stores it in a temporary file (using Python's tempfile.mkstemp), calls file on it and prints the output.
This is useful in cases like debugging an HTTP server with a utility like curl, when we don't want to see the actual output, yet still want to know what it looks like. Using pfile on a pipe, we can easily combine the power of file with the simplicity of curl:
us@here:~$ curl -4 -v http://www.example.com/ | pfile
* About to connect() to www.example.com port 80 (#0)
* Trying 1.2.3.4...
* Connected to www.example.com (1.2.3.4) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: www.example.com
> Accept: */*
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
< Date: Tue, 22 Oct 2013 10:41:28 GMT
< Server: Apache/2.2.22 (Debian)
< Last-Modified: Wed, 17 Jul 2013 17:22:00 GMT
< Accept-Ranges: bytes
< Content-Length: 123
< Vary: Accept-Encoding
< Content-Type: text/html
<
{ [data not shown]
100 123 100 123 0 0 1169 0 --:--:-- --:--:-- --:--:-- 1369
* Connection #0 to host www.example.com left intact
* Closing connection #0
/tmp/tmplZbSB4: ASCII text
us@here:~$
Read a YAML/Perl data structure from a file and dump it in the other format to STDOUT. Uses YAML::Tiny for the YAML jobs.
Translate to and from Czech using the slovnik.cz service.
se [options] word
By default, it outputs a few Czech translations of the word, one per line. It also supports other languages (about 10 in total).
The most useful options are --lines (default is 25), --long, a shorthand for --lines=50, and --direction, which takes a direction keyword in the form "LNcz.cz" or "LNcz.LN", where LN is the 2-letter code (not ISO) of the other language.
Uses www.slovnik.cz, so an Internet connection and LWP::Simple are needed.
Has POD doc (se --man or se --help) worth looking at.
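A minimal session (the actual list returned by slovnik.cz may of course differ):
$ se --lines=1 dog
pes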
Script to measure how long one second takes. For those who know how long one second takes, it can serve as a snippet for a Perl &stamp().
Wrapper around vim to store timestamps for certain files before editing and restore them afterwards.
This is designed for my blosxom files, therefore it's hard-coded to look for files that contain my blosxom path in the pattern; however, this can easily be altered.
The background is that the Blosxom blogging system stores articles as plain files and uses the file timestamp as the article date. I don't want articles to pop up just because I did a minor typo fix later.
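Roughly the same effect can be achieved by hand for any file; this is a sketch of the idea, not what the wrapper actually runs:
$ ref=$(mktemp) && touch -r article.txt "$ref"    # remember article.txt's mtime
$ vim article.txt
$ touch -r "$ref" article.txt && rm "$ref"        # put the old mtime back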
Prints a text file, clears the screen and pauses for 2s over and over.
Designed mainly for use with helper::dmup(), to enable you to see changes in your dumped data structure continuously, but obviously you can use it for any text file that will fit your screen.
For improved visual feedback, it will prepend the file contents with the file path and an "animation". Display of the header can be controlled by options; see --usage.
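A rough shell equivalent of the basic loop, minus the header and the animation (dump.txt is just an example file name):
$ while true; do clear; cat dump.txt; sleep 2; done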
Prints an overview of TCP connection details (IP addresses, ports, server user@host, time); 7 lines of plain text.
Collects GET requests with parameters msg, tag and i, and logs them into a single text file. A simple one-level data structure can be logged as a msg of the form:
name=john;age=32;state=il
The i parameter holds the iteration number: if you would otherwise use tags like test01-012 .. test01-013 to store the iteration number, you'll be better off with this parameter, as it won't break your ability to use tags.
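For example, logging one message by hand with curl (the host and path are those used in the htlogr synopsis below; the msg, tag and i values are arbitrary):
$ curl 'http://192.168.1.1/cgi-bin/htlog.cgi?msg=hello+world&tag=test01&i=12'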
it won't break your ability to use tagsSend a 7-bit plain-text file via HTTP. One of these is sent:
random content of random length given by parameters min and max
EICAR test virus file
Chance to receive EICAR is given by parameter eicar (0-100).
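For example (the script path txtfile.cgi is hypothetical; only the min, max and eicar parameters come from the description above):
$ curl 'http://192.168.1.1/cgi-bin/txtfile.cgi?min=100&max=500&eicar=10'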
Container module for some utility methods for Perl. Probably only dmup() is interesting: it offers a nice quick-and-dirty way of dumping Perl data.
APIs to make using htlog.cgi from Perl and Python scripts even easier.
use htlogr;
my $logger = htlogr::new('http://192.168.1.1/cgi-bin/htlog.cgi');

# we don't need a tag or an iteration number, but they can be useful
my $tag = "synopsis_test";
$logger->log("Commencing synopsis test", $tag);

my $data = {
    foo => 1,
    bar => "Hello world",
};

foreach my $i (1..1000) {
    # log normal messages (now also with the iteration number)
    $logger->log("next 100 done!", $tag, $i) unless ($i % 100);
    # or simple one-level data structures
    $logger->data(
        my_func_returning_hashref($data),
        $tag,
        $i,
    );
}
Note that htlogr also supports passing callable code instead of i or tag. Use this if you find yourself constructing them in a non-trivial way before every call.
I'll illustrate this with the Python API, but of course the implementation and use are the same in both APIs.
Imagine a situation where the existence of a certain environment variable tells us the context in which we are running (e.g. a specific test case) and its value is the iteration (e.g. a Jenkins build number).
This example examines the environment for the existence of such a variable and then uses its name as the tag and its value as the i.
The code:
import os

# assuming the module exposes a constructor of the same name;
# adjust the import to match the real API if it differs
from htlogr import htlogr

logger = htlogr('http://192.168.1.1/cgi-bin/htlog.cgi')

def get_both():
    # return (value, name) of the first variable that is set
    for key in ['var1', 'var2']:
        try:
            return os.environ[key], key
        except KeyError:
            pass

def get_i():
    i, tag = get_both()
    return i

def get_tag():
    i, tag = get_both()
    return tag

logger.log("hello", tag=get_tag, i=get_i)
Now you can e.g. write logging wrapper methods in a trivial yet flexible way:
# inside a class:
def rmsg(self, message):
    self.logger.log(message, i=self.get_i, tag=self.name)

def rwarn(self, message):
    self.logger.log('warning: ' + message, i=self.get_i, tag=self.name)

def rstats(self, stats):
    self.logger.data(stats, i=self.get_i, tag=self.name)