Better Release Management For Distributed Teams: GitFlow

Despite having a number of git interfaces available, including a half-decent one built right into phpStorm (my favorite for any PHP development, ESPECIALLY WordPress), my go-to tool after 5 years remains SmartGit. I find the graphical interface far superior to any other git “helper” out there. The branch visuals and right-click shortcuts not only make me far more efficient than the command line, they have also led to far fewer mistakes when managing repositories. This is especially helpful when trying to see what the other developers are doing on the MySLP SaaS project and coordinating a release.

Recently I decided to upgrade to SmartGit 17. It has been a few years since my last update and I wanted to check out the new stuff they had. While the majority of the UX remains the same and gave little reason to update, there was one new feature that intrigued me: “Git Flow”, a new button on the toolbar. One click later I was knee-deep in the “Git Flow” world. Less than a week later I’m “going for a swim”.

GitFlow

For those that don’t know, GitFlow tackles something we talked about many times with my prior Cyber Sprocket team – how to manage your git repository branches. Sure, the master branch is fairly easy. Nearly every team I know makes that THE “live” branch where your most recent “for public consumption” version lives. But what about the development cycle? What about branches for bug fixes? New features? Which branch is THE “ready for testing” / get-this-on-the-staging-server branch? How do you handle hotfixes; the things that should forgo all the overhead of standard branch management/integration/testing because a bug got through and is breaking the product for thousands of customers?

While working with DevriX on this project I quickly adopted their “develop” and “master” branches over my “prerelease” and “master” branches.  It just made more sense.   They may even be following this model already, but we never really talked about it as we were too busy pushing forward to get MySLP launched by January (mission accomplished).

Git Flow, at least as far as I can tell, is quickly becoming a de facto standard for how to define those branches and what to name them. Rather than get into the details, I’ll leave you with this link to a decent resource that explains the methodology, along with my typical “cheat sheet” on how it works below.

GitFlow Branches

You’ll want to check out the GitFlow post for visuals on these; it will help.

Feature – as in feature/<some-cool-short-thing-here> – these are all of the different features, non-emergency bug fixes, etc. that the team works on.  Disciplined coders will have a separate feature branch for each functional area they work on for an upcoming release.

They should always be branched from the latest develop branch.

Develop – a merge of all the finished feature branches that are ready for release.  Developers should “finish” their feature branches and merge them to develop when their branch is stable and is a candidate for release.

Release – the develop branch that is ready for testing and deployment on the beta (staging) system.  I like to tag these as X.Y.Z-beta-n, though that is not part of the GitFlow model.

Any bug fixes needed to get the system to pass testing go in here and are merged back into develop.

Master – the main production (live) release, tagged with a version number X.Y.Z. It only tracks the latest release branches that passed testing and are going to production.

HotFix – you really screwed something up, but that’s human nature.  A critical bug made it to production.  Create the patch on this branch and merge it back to Master when it passes testing.  Also merge it back to develop to ensure it gets back into your baseline code the dev team is working on.
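For those who prefer the command line over SmartGit’s buttons, the same model maps onto the git-flow extension commands. A minimal sketch (the feature name and version numbers here are placeholders, not from the MySLP project):

git flow init                        # set up master/develop and the branch name prefixes
git flow feature start cool-thing    # branch feature/cool-thing off develop
git flow feature finish cool-thing   # merge it back into develop and delete the branch
git flow release start 1.2.3         # branch release/1.2.3 off develop for beta testing
git flow release finish 1.2.3        # merge into master AND develop, tag 1.2.3
git flow hotfix start 1.2.4          # branch hotfix/1.2.4 off master for an emergency fix
git flow hotfix finish 1.2.4         # merge into master AND develop, tag 1.2.4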

 

SmartGit and GitFlow

The really cool part about this method combined with SmartGit is that I can follow the model with one button click, creating a new feature branch that is super easy to integrate into the develop branch later with another single click.  SmartGit, when using the Git Flow integration, automatically manages the branches, including merging, deleting the finished feature branch (as per your choice), and all the other overhead of branch management, even ensuring you’ve pulled the latest develop/release/master so you don’t get out of sync.

If you get the entire team using the same model, and possibly even using GitFlow-aware tools, it will be far easier for everyone on the team to understand what is going on.  If you don’t already have a well-established branch management model, I suggest you make one, or better yet, follow a model other teams also use, like GitFlow.  No matter how big and established your team is, at some point it is nearly inevitable that you will end up pairing with others outside your organization.  At the very least it will make onboarding new developers a lot easier if we all speak the same language.

 

xDebug Remote Debugging With WordPress and phpStorm

I’ve been using phpStorm to do local debugging of my WordPress app on a VVV box for a few years now.  This week I have been running into some challenges with the MySLP SaaS service, and while the app works fine on my local setup, it is not behaving the same way on the staging deployment out in the cloud.  I need to know why, and I’m tired of hard-hacking the code on the staging site with error logs.  Live debugging with code tracing and memory stack access is a far better way to go.

phpStorm has this and I need it.  The problem is my code is running in the cloud and my codebase is local on my laptop.  How to connect the two?  Here are my hints from the deployment.

Local box running macOS Sierra with the latest 2016.3 phpStorm and a Firefox browser.  Remote box running Ubuntu 16.04 on AWS.  This assumes you have some clue about system admin and local system configuration.

The Setup

On AWS

sudo apt-get install php-xdebug

Edit the php ini file for xdebug that was installed.   I’m on PHP 7 so it is in my PHP 7 config dir under /etc/php/7.0/mods-available/xdebug.ini.

Add some lines to tell xdebug to allow remote connections to your soon-to-be-listening IDE (leave the existing zend_extension line for xdebug in place):

; enable remote debugging so xdebug connects back to a listening IDE
xdebug.remote_enable=1
; the PUBLIC IP of the network your laptop is on (not the 192.168.x.x address)
xdebug.remote_host=<your local box's public IP>
; xdebug connects back on port 9000 by default; listed here to make it explicit
xdebug.remote_port=9000
; must match the IDE session key you will set in phpStorm later
xdebug.idekey=<some-text>

Restart your web server (nginx in my case), and if you are using PHP-FPM or another PHP accelerator, restart that as well.

On Your Local Router

You’ll need to make sure that when your web server sends along the “let me connect to port 9000” request, your firewall knows which laptop to send it to.  Configure your router/firewall to forward port 9000 to port 9000 on the local IP of your laptop.  This is usually a 192.168.x.x address, NOT the public IP you used above.
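The connection direction trips people up: your browser talks to the server as usual, but the debug session is a separate connection that xdebug opens from the server back to your laptop. Roughly:

Browser ──HTTP──▶ AWS web server (PHP + xdebug)
xdebug ──TCP port 9000──▶ your public IP ──router port forward──▶ laptop 192.168.x.x:9000 (phpStorm listening)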

In phpStorm

Set up your server.   In phpStorm go to the Preferences menu, then look for Servers under the PHP entry.

Add  a name for your server.  Set the public URL to the site you are connecting to and the port.

Use path mappings.

Map the SERVER absolute paths to your current project files.  For example, my project file/directory ~/SLP/wp-content/plugins is mapped to the /server/websites/myslp-dashboard-beta/wp-content/plugins directory.

Save that.

Go to the Run / Debug menu and add a new configuration.

Give the config a name and attach it to the server you just set up.

Make sure you set the IDE session key to the same value you put in your xdebug config file.

Firefox Extension

Go to Firefox add-ons and install “The easiest Xdebug” extension.

 

Using It

Start phpStorm and open the debugger with Run | Debug | <your new debug config>.  In the code of your project, pick a module you want to debug and set a breakpoint.

Go to your site in your browser.

Click the new xdebug icon that the Firefox plugin installed in your toolbar to send a “start a debug session” message to your web server.  Load your page and your phpStorm debugger should catch the debug connection coming back from your server for that page load.

This xdebug page will help you visualize the communication process and learn about the configuration of PHP and xdebug.

 

 

 

Why Your WordPress Plugin Should Have Almost Nothing In The Main Folder

As we continue to roll out our Store Locator Plus SaaS service, built on top of WordPress as our application foundation, we continually refine our plugin, theme, and API architecture.    One of the issues I noticed while testing performance and stability is how WordPress Core handles plugins.    Though WordPress caches plugin file headers, there are a lot of cases where it re-reads the plugin directories.

What do I mean by “read the plugin directories”?

WordPress has a function named get_plugin_data().   Its purpose is simple: read the metadata for a plugin and return it in an array.   This is where things like the plugin name, version, and author come from when you look at the plugins page.

However, that “simple” function does some notable things when it comes to file I/O.   For those of you that are not into mid-level computer operations, file I/O is one of the most time-consuming operations you can perform in a server-based application.   It tends to have minimal caching and is slow even on an SSD.   On old-school rotating disks the performance impact can be notable.

So what are those notable things?

It is best described by outlining the process it goes through when called from the get_plugins() function in WordPress Core.

  • Find the WordPress plugins directory (easy and fast)
  • Get the meta for every single file in that directory using PHP readdir and then…
    • skip over any hidden files
    • skip over any files that do not end with .php
    • store every single file name in that directory in an array
  • Now take that list of every single file and do this…
    • if it is not readable, skip it (most will be readable on most servers, so no time saved here)
    • call the WP Core get_plugin_data() function above and store the “answers” in an array; to do THAT, it needs to do THIS for each of those files
      • call WP Core get_file_data(), which does this…
        • OPEN the file with PHP fopen
        • Read the first 8192 characters
        • CLOSE the file
        • Translate all newline and carriage returns
        • Run WordPress Core apply_filters()
        • Do some array manipulation
        • Do a bunch of regex stuff to match the strings WordPress likes to see in headers like “Plugin Name:” or “Version:” and store the matching strings in an array.
        • Return that array which is the “answers” (plugin metadata) that WordPress is interested in.
    • take that array and store it in the global $wp_plugins variable with the plugin base name as the key to the named array.

In other words, it incurs a LOT of overhead for every file that exists in your plugin root directory.
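To make that concrete, here is a rough sketch of the per-file work; this is a simplified paraphrase of what get_file_data() does, not the literal Core source, and the function name is my own:

function myslp_read_plugin_headers( $plugin_file, $headers ) {
    // $headers maps field => header label, e.g. array( 'Name' => 'Plugin Name', 'Version' => 'Version' )
    $fp   = fopen( $plugin_file, 'r' );        // OPEN the file
    $data = fread( $fp, 8192 );                // read only the first 8192 characters
    fclose( $fp );                             // CLOSE the file
    $data = str_replace( "\r", "\n", $data );  // translate carriage returns to newlines
    foreach ( $headers as $field => $label ) { // regex-match each header WordPress cares about
        if ( preg_match( '/^[ \t\/*#@]*' . preg_quote( $label, '/' ) . ':(.*)$/mi', $data, $match ) ) {
            $headers[ $field ] = trim( $match[1] ); // keep the matching header value
        } else {
            $headers[ $field ] = '';
        }
    }
    return $headers; // the "answers" (plugin metadata) WordPress is interested in
}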

Cache or No Cache

Thankfully, viewing the plugins page tends to fetch that data from a cache.   The cache is a string stored in the WP database, so a single data fetch and a quick parse of what is likely a serialized string gets you your plugins page listing fairly quickly.   However, caches do expire.

More important to this discussion is the fact that there are a LOT of functions in the WordPress admin panel and cron jobs that explicitly skip the cache and update the plugin data.  Doing so runs the entire routine noted above.

Designing Better Plugins

If you care about the performance impact of your plugin on the entire WordPress environment in which it lives, and you SHOULD, then you may want to consider a “minimalist top directory” approach to designing your plugins.

Best Practices in the Plugin Developer Handbook mentions “Folder Structure” and shows an example layout with the main plugin-name.php and uninstall.php files at the top level and everything else tucked into subdirectories such as /includes.

However they don’t get into the performance details of WHY you should have an includes directory and what goes in there.

In my opinion, EVERYTHING that is not the main plugin-name.php or uninstall.php file should go in the ./includes directory.  Preferably in class files named after the class, but that is a discussion for another blog post.

If possible, you may even want to make plugin-name.php as minimalist as possible, with almost no code. Even though the fread in WordPress Core’s get_file_data() only grabs the first 8192 characters, most of that content is “garbage” it will not process because it is not part of the /* … */ header comment it is interested in.   If you can get your main plugin-name.php file down to something like 4K because it only includes the header plus a require_once( './includes/main-code-loader.php' ) or something similar, the memory consumption, regular expression parsing, and other work done by get_file_data() is reduced.
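As a sketch of what that can look like (the header values and the main-code-loader.php name are just the placeholder examples from above), the entire main file can be little more than:

<?php
/*
Plugin Name: My Example Plugin
Description: Minimalist main file; all of the real code lives in ./includes/.
Version: 1.0.0
Author: Your Name Here
*/

// Block direct access; only run inside WordPress.
defined( 'ABSPATH' ) || exit;

// Hand everything off to the code in the includes directory.
require_once __DIR__ . '/includes/main-code-loader.php';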

No matter what your code design, it is going to have some performance impact on WordPress.   My guess is it will be especially notable on sites that have 3,987 plugins installed and are running an “inline” WordPress update.   Ever wonder why the latest version of your premium plugins (not hosted in the WordPress Plugin Directory) doesn’t show up?   It could be because WordPress spent all the time granted to a single PHP process reading the first 8K of 39,870 files because all those plugins had a dozen-or-so files in the root directory.

Help yourself and help others.  Put the bulk of your plugin code in the includes folder.  The WordPress community will thank you.

 

Hacking Snagit PNG Files For Clarity

While working on some updates to the Store Locator Plus website I ran into a rather annoying issue when it came to the logo for the website.   On the documentation site the logo had excellent clarity on the text beside our Store Locator Plus logo.  When creating the exact same logo in the size I wanted, the graphic on the new site was always far less crisp than on the documentation site.    It turns out that Snagit is not very good at rendering OTF fonts to PNG for Retina displays.

The Original

The original image used on that site is from Snagit.   Unfortunately, Snagit is still unable to handle the Scalable Vector Graphics (SVG) format, so the graphic portion of the logo is a Snagit screen capture from Firefox rendering the SVG on a retina display.   This provides a high-resolution graphic that Snagit does a fine job scaling down to my 100-pixel height without a lot of artifacts.  The text does not scale as well, so I use the native OpenType font (OTF) file for the Moon typeface and use the text tool option to add the text to the logo.

This is the end result of the standard PNG output from Snagit 4.1.1:

Browser Scaling

As it turns out, using Snagit to scale the original graphic down from the two-times-too-large size to the size I want for the header doesn’t work as well as I had hoped.   In the past, using a graphics application to scale images ALWAYS yielded superior results to in-browser scaling.  It also means faster page loads, as an image scaled down by the browser has to be downloaded at twice the size it needs to be.    This is much like driving a bus to the grocery store instead of the family sedan.  Not very efficient.

However, the resolution of the scaled images from Snagit does not retain the clarity needed for today’s retina displays.   For those that are not aware, Apple Retina displays and now many 4K displays have more than TWICE the resolution of monitors from a few years ago with the same real estate.   They use internal trickery to make older low-resolution images look right on the screen and absolutely shine when encountering graphics and photos that are shot at much higher resolution.   Images look clearer, better defined, and have deeper, more natural-looking color palettes.

As it turns out, the web browsers on retina devices tend to do a far better job at scaling images these days than most common graphic apps.    Here is the same image at twice the resolution in both the native size and scaled.

Browser Native 700×160

Browser Scaled To 350×80

Snagit Original PNG 350×80

If you are viewing the images on a high resolution screen you should notice that the Browser Scaled image in the middle is better defined than the Snagit image that was sized specifically to the 350×80 dimensions.

PDF Files and DPI

Being the scientist and tech geek that I am, I was not going to just use browser scaling and move on to my next project.    It is a disease that impacts my productivity every day, but I had to know a bit more about what was going on, and I wanted to get the CORRECT results, not a hack that worked “well enough for today”.   I wanted to serve up a 20KB image, NOT a 96KB image, every time someone loaded my page.   Less bandwidth is still important for all those people using pay-for-what-you-consume mobile plans and is a good idea in general for page loading performance.

I soon discovered that exporting the image from Snagit to a Portable Document Format (PDF) file always kept the image crisp and clear.   This makes sense, as a proper PDF contains graphic images and font data internally within the document.   That means the special font in my image has all of the formulas necessary to draw each letter using math to describe a curve or a line instead of binary on/off pixels arranged in a specific pattern.   It makes for much clearer-looking fonts at a variety of resolutions.

It also turns out that from within the Preview application on macOS you can export that PDF to a PNG file with an added twist that is not available in Snagit: you can set the dots-per-inch (DPI) of the file being output.    The more dots you have to draw something like a letter “O”, the clearer it looks.   Think about it.   An “O” made up of 4 dots doesn’t look much like an “O”, though it does make a good diamond shape.  With 40 dots that “O” resembles a circle.   This meant that I could set the DPI to various levels and see how it looks in the browser.  I ran a series from 72 to 600 DPI.  Here is what I got:
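The pixel dimensions in the captions below follow directly from the DPI: the 350×80 export at 72 DPI represents roughly 4.86 × 1.11 inches, so the same physical size exported at a higher DPI simply multiplies the pixel counts:

350 px ÷ 72 DPI ≈ 4.86 in      80 px ÷ 72 DPI ≈ 1.11 in
 90 DPI:  4.86 × 90  ≈  437 px wide,  100 px tall
300 DPI:  4.86 × 300 ≈ 1458 px wide,  333 px tall
600 DPI:  4.86 × 600 ≈ 2916 px wide,  666 px tall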

Snagit 72 DPI PNG Save

PDF Export 72 DPI (350×80) No Scaling Necessary

90 DPI (437×100) Browser Scaled to 350×80

150 DPI Browser Scaled to 350×80

300 DPI (1458×333) Browser Scaled to 350×80

600 DPI (2916×666) Browser Scaled to 350×80

To get a better idea of what DPI is and how it impacts clarity when drawing circles, click the above images to view the full-size, non-scaled versions.   Zoom in so the images take up the same space on your screen and you will see the pixel artifacts hidden in the image that your brain perceives even if you can’t quite point to them in the original image.

And just for fun, a copy of the 700×160 image with the text converted to pixels instead of an OTF font within Snagit, then scaled down to 350×80 for comparison.

Snagit 700×160 Image Versus OTF Scaling to 350×80

Snagit OTF Saved As Native 350×80 Image

Summary

There are a few conclusions to be made from this exercise.

  • SVG formats need to be more widely accepted.
    They are far superior for scaling and rendering of images.   This logo and font would look much nicer and only require a single mid-size file to look great on ALL screen resolutions and ANY size.

    • Snagit needs to support SVG reading and writing.
    • WordPress needs to allow SVG formats anywhere PNG, GIF, and JPEG images are allowed.
  • The native output DPI for PNG files from Snagit is 72.
    This makes sense as the “web standard” for screen output has always been 72 DPI. Printed media was 300 DPI, then 600 DPI, then 1200 DPI.

    • Screen media standards need to be updated for the high pixel densities found on today’s devices.
  • Browser scaling algorithms on high-resolution displays are currently better than both the Preview and Snagit apps I used in these tests.

This is by no means a complaint about Snagit or any of the other applications noted here.  In fact I LOVE Snagit and use it as my go-to for creating instructional videos and images for online documentation.    Hopefully these findings help you determine the best solution for your image management so you can balance performance and the user experience for your web and mobile apps.

 

Save Money With RDS / Get Your gSuite Mail Forwarding

Two quick hacks I learned today while doing some general “tech life” maintenance.

Amazon Web Services – Saving Money

The first “hack” is an easy one that I am now kicking myself in my own ass for not picking up on 6 months ago.    This is a $600 oversight that is a LOT of beers’ worth of savings.   The trick is simple…  RESERVED INSTANCES.    Especially with RDS.

It turns out that for RDS (and possibly EC2 and other instance types) you can “purchase” a No Upfront Costs reserved instance.   The cost?   FREE.     The savings can be substantial.  For an M4.Large size instance it can be as much as $80/month!

I have been running an M4.Large-sized RDS instance for months.    The hourly rate is $0.350.    That runs $3,066 per year, or about $255/month.

By purchasing a reserved M4.Large instance for 1 year with no upfront fee, the rate drops to $0.241/hour, or $2,111 annually.   That is about $175/month.    A quick $80/month saved for clicking a few buttons and pressing submit.

In my case I opted for the partial upfront payment.   You pay $648 today as a one-time charge for that M4.Large reservation for 1 year, and the hourly rate drops to $0.132/hour, or about $1,156/year in hourly charges; add the upfront payment and it works out to roughly $150/month.    That is $105/month less than what I was paying.  Sweet!
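Pulling the numbers above together (a year is 24 × 365 = 8,760 hours):

On-Demand:          $0.350/hr × 8,760 hr        = $3,066/yr ≈ $255/mo
No Upfront RI:      $0.241/hr × 8,760 hr        = $2,111/yr ≈ $175/mo  (about $80/mo saved)
Partial Upfront RI: $648 + $0.132/hr × 8,760 hr = $1,804/yr ≈ $150/mo  (about $105/mo saved)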

gSuite Email Forwarding

I have been using gSuite / Google for Work / Google for Business, or whichever of the half-dozen names they’ve used in the past 10 years, for… well… about 10 years now.     One of the key features I use regularly is the email routing, which allows a single domain-wide email address to be forwarded to both in-company gSuite and non-Google users (such as Yahoo! email addresses).     It makes it easy to give customers (or my son’s tennis team) a single short email address that goes to multiple people.

It turns out that the forwarding rules can be written as regular expressions.  Add a gSuite email route for skunkworks@yourdomain.com, set the recipients to a half-dozen different email addresses, and anything sent to skunkworks@ will get broadcast to the team.

It turns out that the regular expressions allow you to do some pretty cool things, like having email coming in to skunkworks@(yourdomain|mydomain).com hit that same group of people whether the person sends to yourdomain.com or mydomain.com.

Pretty cool.

But… this system happens to break a common tenet of nearly every major email system in existence.  It is CASE SENSITIVE.

What?!?!

Yup.   Email to Skunkworks@ will NOT go anywhere, and the sender will get an “undeliverable” message for every recipient not on the Google family of services.

Well, it turns out a simple hack can fix that: add ^(?i) at the start of the expression to make it case-insensitive.
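So for the example above, the full route expression ends up looking something like this (yourdomain/mydomain are placeholders, and the backslash makes the dot a literal character):

^(?i)skunkworks@(yourdomain|mydomain)\.com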

 
