It might be a little late to the party, but I finally got around to putting together some responsive stylesheets. This site should now be rocking out on the iPhone like it’s 2007! The old stylesheet scaled well enough, but the updates should make things a bit more elegant on small screens while making better use of the additional real estate on larger screens.
So I needed a handy, low-overhead way to test bandwidth between myself and a remote server. scp could give me a relative thumbs up or down, but its encryption overhead distorted the figures. dd and netcat to the rescue.
Open a listening socket on machine "A":
while true; do nc -v -l 2222 > ddTest; done
Push a file from machine "B" to machine "A":
dd if=/dev/zero bs=50M count=1 | nc <ip of machine A> 2222
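When the transfer finishes, dd on the sending side typically prints a summary that includes elapsed time and throughput. If your dd only reports bytes and seconds, the division is easy enough to do yourself; a minimal sketch with made-up figures (50 MiB pushed in 4.2 seconds):

```shell
# Made-up figures: 50 MiB pushed through nc in 4.2 seconds
bytes=$((50 * 1024 * 1024))
seconds=4.2
awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f MiB/s\n", b / s / 1048576 }'
# prints: 11.9 MiB/s
```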
Finally got a chance to get down and dirty creating Chef recipes for use in OpsWorks. These are primarily just some notes for myself.
- opsworks-agent-cli allows you to view logs, manually run setup scripts, etc
- not as useful a tool as one might want
- its ability to spit out the entire stack config into a JSON file is probably its most useful feature
- the log presented by show_log lives in /var/lib/aws/opsworks/chef/ if you’d prefer to tail it directly
- other logs are in /var/log/aws/opsworks, though I haven’t found those very informative or useful
Kudos to a dzone blog post (link below) for help in testing cookbooks. In short, it was much easier to debug the cookbooks from an instance in a dev OpsWorks env:
- made my edits to the cookbooks on the dev instance
- tested until desired results were achieved
- copied the final changes back to my source
- committed source to repo for automation use
cd /opt/aws/opsworks/current/
opsworks-agent-cli get_json > /tmp/attributes.json
# edit /tmp/attributes.json as needed
bin/chef-solo -c conf/solo.rb -j /tmp/attributes.json -o <cookbook::recipe>
I was only able to get "setup" recipes to run, not "deploy" recipes, when executing a recipe with a "stack command" from the OpsWorks web UI.
For next time I forget…
sed -i .bk -E 's/text to replace (text to keep)/replaced text \1/' file
- -i: edit the file in place rather than just writing the result to stdout. A backup of the original is made using the supplied extension
- -E: support modern (extended) regex, as opposed to basic regex. Without -E, you’ll have to escape the ()
Each instance of (something) is captured as a group which can be referenced in the replacement with \1, \2, and so on.
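For example, running the same substitution (minus -i) against a throwaway file shows the capture group at work:

```shell
# Throwaway input file for demonstration
printf 'text to replace text to keep\n' > /tmp/sed_demo.txt
sed -E 's/text to replace (text to keep)/replaced text \1/' /tmp/sed_demo.txt
# prints: replaced text text to keep
```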
For next time I forget…
/sbin/mdadm --create /dev/md1 --chunk=256 --level=raid1 --raid-devices=2 /dev/xvdm /dev/xvdn
Maybe I just wasn’t looking in the right place, but I couldn’t find how to set the default font size in Notes.app.
I guess I’ve been out of the email management scene for a while now; I’m totally behind on DKIM and DMARC. I was introduced to these two technologies by way of an article posted to the ISC Diary by Johannes Ullrich titled "How to send mass e-mail the right way". I’ve seen a couple of these kinds of articles lately, and I’m always interested in what the bulk emailers are up to.
Before I get too much further, let me just break to explain what these technologies are:
- DKIM is a method for servers (not users) to digitally sign emails so that they can be validated by recipients.
- DMARC is essentially the policy layer: it uses the results of the DKIM and SPF checks to determine how an email will be treated by the recipient
- SPF makes use of DNS records which define which servers are authorized to send email for a given domain.
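All three lean on DNS TXT records. As a rough, hypothetical sketch (example.com, the selector name, and the policy values are placeholders, and the DKIM public key is elided):

```
example.com.                      TXT  "v=spf1 mx -all"
selector._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key>"
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```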
Coincidentally, I recently upgraded my mail server and it now includes DKIM support. After reading the above article, I decided to take a closer look at DKIM and get my server configured to use it. In a general sense, it’s similar to SPF in that it’s a mechanism utilized at the server level to determine if a given email came from a legitimate source. Like SPF, DKIM is (partially) implemented using DNS records. Unlike SPF, however, DKIM adds a header containing a cryptographic signature to each email as it’s sent. The recipient server fetches the sending domain’s public key from DNS, verifies the signature against it, and passes or fails the email accordingly, inserting those results into the headers as it’s delivered. If you take a look at some emails in your inbox, you may find this information in the full headers. For instance, an email I sent myself from a Yahoo account has this in the headers:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1390992156; bh=BWCYkcoQVlLa9vCcxk+HHaO7+yl8AQX4MBV1syoqzRE=; h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type; b=ZXgvPU6uXRANhke79swu/qAzvfcbwKVl993ao8TEzOrj/1TX78UQ6vbmKq1aVC48lBJGHcQ2UNrcFmXs3GFXyv6kMZ/Tp3TKi86HeE2RWVEIkEgJ1ihIssBfU0KxTWocHCfaJn9W0uIrfE+gX8rH4vr9ZFeGlH77+xVH5wiUeyY=
My server’s check:
Authentication-Results: dkim=pass (1024-bit key) header.d=yahoo.com; domainkeys=pass (1024-bit key)
Because of email’s distributed nature, adoption by ISPs and vendors is key. According to dmarcian.com, DMARC (and therefore likely DKIM as well) is supported for over 3 billion email users, with big names like Facebook, Gmail, and Outlook on board. Indeed, my verification testing showed emails from my friends with Yahoo and Gmail accounts all carrying DKIM signatures.
Gmail (and likely Outlook, etc.), with its DMARC support, has supposedly (I don’t have a Gmail account to verify) taken this to the next logical step by presenting the recipient with a golden key icon on emails that pass verification. In theory, emails that fail verification would be filtered out in some way. Until DMARC integration is built into my webmail vendor of choice, I’ve created filters that tag emails failing DKIM verification, mark them as read, and file them into my "junk" folder.
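My filters live in my webmail provider’s UI, but if your mail server supports Sieve filtering, the same behavior might be sketched like this (the folder name, and the assumption that the server stamps Authentication-Results headers on delivery, are mine):

```
require ["fileinto", "imap4flags"];

# Assumes the receiving server records DKIM results in Authentication-Results
if header :contains "Authentication-Results" "dkim=fail" {
    setflag "\\Seen";    # mark as read
    fileinto "Junk";
}
```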
I just wanted to post a follow-up to my previous post regarding my switch to TracFone. Our experience so far has been very positive. We’ve accomplished the two things we sought: 1. maintain quality and 2. lower costs.
We discovered that the phones we purchased from TracFone are using the same provider we previously had, at least in our home calling area. This was a good thing because we were pleased with the quality and reliability of our previous provider.
It’s taken some time to get over the whole "oh no, am I going to run out of minutes" concern. At first, I was constantly checking my remaining minutes and worrying about staying on a call too long. This fear has started to subside. We signed up for TracFone’s "value plan"; minutes and days are automatically renewed every month. I signed up for the lowest tier plan (fewest minutes and therefore cheapest overall) and my wife signed up for the highest tier plan since she uses her phone more. We both get more minutes than we’ve been using in a month. Seeing our rollover minutes build up month after month has helped dissipate our worries. The other nice thing about the Value Plans is that, if we build up a huge pile of minutes, we can downgrade to a lower tier and save even more money for a few months.
As it stands, we’ve cut our wireless bill almost in half. Over time, by making adjustments to our plans, we should be able to cut it even further.
I’ve been using the static site generator PieCrust for several years now to manage this site and its blog. It’s been really nice having my site in plain, raw HTML files with the bonus of being able to write it all up in Markdown. There’s always been a downside, though: the mechanics of publishing a new post. It’s a process involving rsync, git commits, ssh, or a mixture of these. Contrast this to something like WordPress with its slick web administration/editor and it feels much more cumbersome than it should be. I’ve always been in search of something that allowed me to start writing blog posts as quickly and easily as possible.
A few months back I stumbled upon myTinyTodo, a very quick and simple web-based task manager. It’s a pretty straightforward app in terms of task management: you type a task into a field, press "enter", and it’s added to a list. However, a few more features made it particularly interesting. First, a time stamp is affixed to each task upon creation. Second, the text you enter when adding a task becomes the task’s title; when you edit the task, a second field is presented for expanded content, along with a third field for tags. Third, an RSS feed can be enabled for each to do list (you can have multiple to do lists, each on its own tab within the app). Combine these features and you have everything you need to publish a blog. If you wanted to skip any HTML presentation of a blog, just hand out the RSS link for the todo list and you have an instant blogging engine right next to your to do lists. I took this a step further, though, so that I could incorporate it into PieCrust.
- I wrote a PHP script using SimplePie to pull in the RSS feed. This script does a couple of things:
    - checks whether each task in the feed has already been added as a post to PieCrust; if it has, it diffs the incoming content against the content that already exists in PieCrust and merges/creates as needed
    - inserts the post’s metadata section at the top of the post’s page (things like author, tags, date published, template to use, etc)
- The script is run via cron every minute. I’ll probably write a custom daemon at some point so that I don’t have to wait a whole minute for cron to run.
- I wrote a custom daemon using inotify so that PieCrust can check whether anything has changed in its posts directory. In other words, any time the script from step one adds a new post or edits an existing post, inotify will see the change to the "posts" directory and kick off a PieCrust update.
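The inotify daemon itself is specific to my setup, but the core idea, rebuild whenever anything under the posts directory changes, can be sketched with a timestamp file instead of inotify. Every path here is made up:

```shell
# Polling stand-in for the inotify watcher: detect changed posts.
# All paths are hypothetical.
POSTS_DIR=/tmp/posts_demo
STAMP=/tmp/posts_demo/.last_bake
mkdir -p "$POSTS_DIR"
touch "$STAMP"
sleep 1
echo "a new post" > "$POSTS_DIR/2014-02-01_hello.md"
# Anything newer than the stamp file means a re-bake is needed
if [ -n "$(find "$POSTS_DIR" -newer "$STAMP" -type f)" ]; then
    echo "rebuild needed"    # here you would kick off the PieCrust update
    touch "$STAMP"
fi
# prints: rebuild needed
```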
After thinking and searching for a while, I believe I’ve finally found the perfect marriage: simple, portable content presentation with PieCrust, and easy, convenient web-based content creation with myTinyTodo.
This post was created with myTinyTodo and PieCrust
I often want to edit a script with vi and then run it. There are probably other ways to do this, but this works for me:
# open file in vi
vi /usr/local/bin/my_script.php
# make some changes, then save them
esc :w!
# put vi in the background
^z
# use bash's previous-command shortcut; !!:1 represents the first argument of the previous command
php !!:1
# see results
# resume vi for more editing
fg