Automated Web App Testing With phpStorm

Selenium IDE was a great way to handle automated web app testing for products like the Store Locator Plus plugins for WordPress. Selenium IDE is a simple script recorder and playback tool that runs on Firefox. Or, I should say, it used to run on Firefox. That broke in 2017 when Firefox 52 came out.

After a lot of research I finally found a viable alternative to Selenium IDE that works with modern browsers. It is also free, locally installed, and open source. All winning attributes. Paying for services is not much of an issue, so the free part is not a requirement, just a “that’s nice” feature.

Web app testing services

I tried several paid alternatives, including BrowserStack, a paid monthly service that runs virtual desktops and mobile device simulations hosting various browsers. Having to connect to a remote server via proxies or tunnels is a pain. It also means no testing when offline or when the network is unreliable. Having multiple browsers is great, but 90% of the testing that needs to happen is base functionality, which is the same across browsers. Modern browsers are also very good at testing mobile, with browsers like Safari going beyond simple screen sizing in how they mimic iOS, for example.

Other alternatives included several locally installed proprietary test frameworks. Nearly every one of them ranged from mediocre to downright horrid. This is clearly an industry stuck in the 1990s mindset of application development, from the start where you have to fill out a form with all your contact info to be “allowed” to demo the product (and be harassed later by sales people) to the 1980s desktop-centric interfaces. Many did not work on macOS. Those that did were heavy, bloated, and had a steep learning curve. Does nobody integrate with phpStorm, my web app IDE?

It just so happens that the best local testing suite today happens to be free.

The winner? Selenium WebDriver, with a few libraries like WebDriverIO + Mocha + Chai to make things easy.

Getting Foundation, Sass, and phpStorm To Play Together

In an effort to improve the overall user experience in Store Locator Plus™, my business locator plugin for WordPress, I have been playing with pre-defined interface frameworks. I’ve been working with Vue recently and like their pre-defined templates. However, the MySLP version of Store Locator Plus was built using Foundation and React. I decided it would be a good idea to stay with Foundation to help style my interfaces in the WordPress SLP product.

I already have phpStorm set up for my WordPress development and use Sass to compile my own CSS libraries. I wanted to add Foundation to the mix, but it turned out to take longer than I anticipated. With any luck my cheat sheet here may save you some time.

PHP Autoload and Singleton(ish) Model

Once every couple of years I take a month off from hacking away at the Store Locator Plus products and delve into some personal projects. It is a way to learn some new things and try out new techniques without breaking the locator product. With the locator being my primary source of income these days, it is important to keep it intact while “trying new things”.

Some of the things I’ve been working on this week include Backbone, Bootstrap, React, Underscores, some new REST techniques, and a simplified object model for my internal “who cares about backward compatibility” PHP 7 projects.

One of the things I started playing with was autoload functionality. Yes, autoloading has been around since PHP 5, but the tooling around it has matured considerably, and with PHP 7 I think it is ready for production use. Here are some things I learned along the way.


If you employ autoloaded classes, use a singleton-style model.

Autoloading makes it tempting to litter your code with $obj = new Thing(); $obj->do_it(); and later, in another function, do a similar operation with $obj2 = new Thing(); $obj2->do_something_else();.

If you don’t use a singleton-style model you are spewing pointers all over the place and chewing up small chunks of memory. Those pointers can be a few hundred bytes to several KB each. Memory usage is less of a concern these days, but allocation and de-allocation (garbage collection) takes time. These days performance is a critical design feature of any web or mobile app.

The Self-Managed Loader

In some of my older PHP code I use a self-managed loading mechanism. Throughout the code I use a require_once( “… class file …” ) call. The PHP file uses the class_exists() function to wrap the entire class, along with a small snippet of code that checks a “global list of instantiated object pointers”. Before you start going off about “God objects”, that is not how this architecture works. The list is an array of object pointers I can test to see if a class is already instantiated; if it is, I use that pointer. The objects do not require the main list to function and can live independently as truly independent objects in most cases. I know from my application architecture that each class using this method should only EVER be employed once, so I use the “object list” to enforce that.

However, I didn’t like all of the require_once() calls littering my code. It is slightly more code clutter, and my guess is that checking the file resources table in PHP on every one of those calls is far less efficient than checking a symbol table (or PHP’s version of one).

An example of my “require registration” code…
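Roughly, the pattern looks like this. This is a minimal sketch, not the actual plugin code: SLPlus, add_object(), __get(), and SLP_Text are names from this post, but the implementation details here are my own assumptions.

```php
<?php
// Hypothetical sketch of the "require registration" pattern. The real
// SLPlus class has more utilities; this only shows the registration idea.
class SLPlus {
    private $objects = array();

    // Register an instantiated object, keyed by a simplified property name
    // (the class name with the SLP_ prefix dropped). First one wins.
    public function add_object( $object ) {
        $property = str_replace( 'SLP_', '', get_class( $object ) );
        if ( ! isset( $this->objects[ $property ] ) ) {
            $this->objects[ $property ] = $object;
        }
        return $this->objects[ $property ];
    }

    // Magic getter: $slplus->Text returns the registered SLP_Text instance.
    public function __get( $property ) {
        return isset( $this->objects[ $property ] ) ? $this->objects[ $property ] : null;
    }
}

global $slplus;
$slplus = new SLPlus();

// Every class file wraps its class definition and registers one instance:
if ( ! class_exists( 'SLP_Text' ) ) {
    class SLP_Text {
        public function get_text( $slug ) {
            return "text for {$slug}";
        }
    }
    global $slplus;
    $slplus->add_object( new SLP_Text() );
}

echo $slplus->Text->get_text( 'label' );   // text for label
```

A second require of the same file is harmless: class_exists() short-circuits the definition and add_object() keeps the original pointer.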

Every single class has the class_exists() test wrapper and the global $slplus with the add_object() call.   Inside the SLPlus class I have some utilities to simplify property names and use a __get() magic method so I can reference $slplus->Text to get the single instantiation of the SLP_Text object.


PHP 5 introduced an autoload hook that fires whenever you try to instantiate a class but forgot to require the PHP class definition file. You could write a function that said “go look in this directory and require the source”. The original __autoload() function was superseded by the more flexible spl_autoload_register(), to the point where __autoload() is deprecated as of PHP 7.2. I’m going to refer only to the spl_autoload_register() method here. The ramifications on HOW you implement this (the singleton thing mentioned above) are the same regardless.

As noted, autoloading is a way for PHP to make a last-ditch effort to keep code from crashing when you reference a class but forgot to include the source explicitly in your code. It is a great way to reduce code clutter. The simple version: you write a function that takes the missing class name and tells PHP how to find where that file might be in your source architecture, then usually requires it to avoid a crash.

Here is the overview from my new RUIn (Are You In? or “Ruin”) side project. This is a base class for a custom WordPress theme I am building for this project. It does use the class_exists() test, but only because I need at least one global object to reference for other functionality in my theme. It will also be helpful for the singleton-style model I describe later.

In my projects I make sure all of my PHP file names match the class name they contain. This is a general best practice that many PHP developers employ. It makes your life a LOT easier, as you can do some neat code tricks like the one you see here. It also means modern IDEs, including my favorite phpStorm, can be far more intelligent about your code (auto-complete, grunt and npm task managers, syntax checkers, code smell detection, and a lot of other tools need far less wrangling). That means I can use something as simple as the following code to instantiate my objects when needed.
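A sketch of that simple autoloader: the RUInTheme_ prefix and /inc/module/ path come from this post, while the exact filter logic is my assumption.

```php
<?php
// Register an autoloader for this application's classes only. PHP calls
// this with the missing class name whenever an undefined class is used.
spl_autoload_register( function ( $class_name ) {
    // Ignore classes that are not part of this application.
    if ( strpos( $class_name, 'RUInTheme_' ) !== 0 ) {
        return;
    }

    // File names match class names, so the path is derivable.
    $file = __DIR__ . '/inc/module/' . $class_name . '.php';
    if ( is_readable( $file ) ) {
        require_once $file;
    }
} );
```

With this registered, `new RUInTheme_Thingy()` anywhere in the code pulls in inc/module/RUInTheme_Thingy.php automatically.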


Now anywhere in my code that I need a new instantiation of an object, I can call $obj = new RUInTheme_Thingy(); and it magically includes the source. My architecture dictates that all modules go in the /inc/module/ directory and start with RUInTheme_ if they are part of this application. Nice. No more requires throughout my code.

Here is an extended example that loads WordPress admin functions only when a logged-in user is looking at backend admin pages, or loads UI functions when someone is viewing a front-end page. It means the app is not loading memory up with code that is never going to be executed, such as UI features that aren’t used on an admin page and vice versa.
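A minimal sketch of that conditional load. In the real theme is_admin() comes from WordPress and the two classes are pulled in by the autoloader; here a stub and inline class definitions keep the sketch self-contained, and the class bodies are assumptions.

```php
<?php
// Stub so this sketch runs outside WordPress; WordPress provides the real one.
if ( ! function_exists( 'is_admin' ) ) {
    function is_admin() { return false; }
}

// In the real theme these live in inc/module/ and are autoloaded.
class RUInTheme_Admin {
    public function __construct() { /* hook admin pages, settings screens */ }
}
class RUInTheme_UI {
    public function __construct() { /* hook shortcodes, enqueue styles */ }
}

// Only the code path the current visitor needs is ever instantiated:
if ( is_admin() ) {
    $app = new RUInTheme_Admin();   // backend admin pages
} else {
    $app = new RUInTheme_UI();      // front-end pages
}
```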


Now to extend my application I only need to add a class and matching file name in the inc/module/ directory and use new ClassName() in my code. Sweet!

The Un-Singletons

Now what about those singletons and the memory issue?

I quickly discovered that having super-clean readable code was cool, but I was pissing memory all over the floor and mucking up my server. That’s fine for my local Vagrant box (though it was breathing heavily when hammering the app), but on a production server that would not be a nice “feature”. Here is what I was doing, as I didn’t need to reference the object, just employ the methods:
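Roughly the pattern in question (a sketch; the hook-wiring comment stands in for the real WordPress calls):

```php
<?php
// Instantiate purely for the constructor's side effects and throw the
// pointer away. Clean to read, but every call allocates a new object.
class Theme_Admin {
    public function __construct() {
        // add_action( 'admin_menu', array( $this, 'admin_menu' ) ); etc.
    }
}

new Theme_Admin();   // object created just to run the constructor
new Theme_Admin();   // ...and nothing stops a second copy later in the code
```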

Functionally this was fine. The constructor set up the proper WordPress hooks and the methods did their magic whenever I needed them to. Great clean code. Code that happens to piss memory all over the floor just for fun.

No, the memory leak was not a huge issue: a small 320-byte pointer that was never referenced. In reality the memory was not a leak, as the architecture only calls these objects once and they are never re-used. Whether or not I assigned the new object to a variable, the memory consumption would not change. Not a big deal, until you start getting into larger, more complex apps.

The problem with larger apps is that you will likely end up with a class that has an object that is polymorphic, or has independent properties for each element of a list of objects, or some other recursive deployment. In a fairly useless example, you would end up with dozens of 300-700 byte pointers hanging around and give PHP a lot more clean-up work with something like this:
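Something along these lines (a deliberately wasteful, made-up illustration; Theme_Text and the label list are hypothetical):

```php
<?php
// One helper object per list element: every pass through the loop
// allocates a fresh object even though the objects carry identical state.
class Theme_Text {
    public function label( $slug ) {
        return ucfirst( $slug );
    }
}

$labels = array();
foreach ( array( 'home', 'about', 'contact' /* ...dozens more... */ ) as $slug ) {
    $text            = new Theme_Text();   // a new pointer every iteration
    $labels[ $slug ] = $text->label( $slug );
}
// Each $text was a few hundred bytes of object overhead PHP must clean up.
```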

While this is an over-simplified example that can be optimized in many ways, you can imagine it being part of multiple independent classes, such as an AJAX or REST processor that cannot be certain the Theme_Admin() object has been invoked and does not have public properties to share the instantiation. For these to be truly independent objects they must each employ their own pointers to the object. Luckily PHP 7 is smart enough to keep the object definition itself in shared memory, significantly reducing memory load and increasing performance, but there is still a lot of excess overhead with this model.

There is a better way.

A Singleton Autoloaded Model

Use a PHP class version of a singleton by employing a static instance property in your class. Add a public method and, instead of using a new call to deploy an object, use your instance manager method. This ensures all of your PHP references that use this model get back the same memory pointer. Less overhead for PHP overall, and with a large or highly recursive application, a lot less memory usage.

Since I employ this method for nearly all of the classes in my application, I create a base object that each of my auto-loaded classes extends. To invoke a new object I use a static call to the class, something like Theme_Admin::get_instance(). Much cleaner than a long require_once() with a path.

Here is the full example:
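What follows is a hedged reconstruction of that base-class singleton: the RUInTheme_BaseClass and Theme_Admin names follow this post, but the implementation details are my assumptions.

```php
<?php
// Base class every auto-loaded class extends. A static instance map plus
// late static binding gives each subclass exactly one shared instance.
class RUInTheme_BaseClass {
    private static $instances = array();

    // Return the single instance of whichever subclass this is called on.
    public static function get_instance() {
        $class = get_called_class();   // late static binding
        if ( ! isset( self::$instances[ $class ] ) ) {
            self::$instances[ $class ] = new $class();
        }
        return self::$instances[ $class ];
    }

    // Protected: callers must go through get_instance(), not new.
    protected function __construct() {
        $this->initialize();
    }

    // Subclasses wire up their WordPress actions/filters here.
    protected function initialize() {}
}

class Theme_Admin extends RUInTheme_BaseClass {
    protected function initialize() {
        // add_action( 'admin_menu', ... ) in the real theme
    }
}

// Every caller gets the same memory pointer back:
$a = Theme_Admin::get_instance();
$b = Theme_Admin::get_instance();
var_dump( $a === $b );   // bool(true)
```

The protected constructor means a stray `new Theme_Admin()` is a fatal error outside the class hierarchy, which is exactly the enforcement the “object list” used to provide.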





The Object List

You may have noticed I still have a __get() method in the main class, as well as the objects property. Even with the singleton method and autoloading, I find it far easier to reference my instantiated objects from auto-derived properties on the main object. This way I know that all of my primary utilities can be referenced via $MainObj->ClassNameWithoutBase.

I can also be confident that, since I’m using the singleton-style implementation, I can call get_instance() for the classes I need whenever I am starting a logic path and later reference the methods.

In my example below I do manually define global $RUInTheme and provide a @var code hint. This is so phpStorm auto-completes all my method calls and type-checks parameters in real time (far fewer bugs, yay!). I could get away with cleaner code by referencing $GLOBALS['RUInTheme']-><classname>->method(), but I opt for faster development and syntax checking over saving a line of text.
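The hint pattern looks like this. A minimal sketch: the RUInTheme main class is stubbed with made-up members purely so the fragment runs; only the global + @var docblock idiom is the point.

```php
<?php
// Stand-in main object; the real RUInTheme class is far larger.
class RUInTheme_Text {
    public function get_text( $slug ) { return "text for {$slug}"; }
}
class RUInTheme {
    public $Text;
    public function __construct() { $this->Text = new RUInTheme_Text(); }
}
$GLOBALS['RUInTheme'] = new RUInTheme();

function ruin_render_welcome() {
    global $RUInTheme;
    /** @var RUInTheme $RUInTheme */
    // The @var docblock above does nothing at runtime; it tells phpStorm
    // the type, so ->Text->get_text() auto-completes and is type-checked.
    return $RUInTheme->Text->get_text( 'welcome' );
}

echo ruin_render_welcome();   // text for welcome
```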

Here is the UI code from RUInTheme that makes use of the singleton, which auto-registers itself on the RUInTheme base object using a shorthand (dropping the RUInTheme_ prefix I use on all classes in this project).
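A hedged sketch of that self-registering UI class. The add_object()/__get() shorthand and the RUInTheme_ prefix stripping come from this post; the pared-down RUInTheme main object and the render_shortcode() method are stand-ins to keep the sketch self-contained.

```php
<?php
// Pared-down main object: keeps a shorthand-keyed list of instances.
class RUInTheme {
    public $objects = array();

    public function add_object( $object ) {
        $short = str_replace( 'RUInTheme_', '', get_class( $object ) );
        $this->objects[ $short ] = $object;
    }

    public function __get( $property ) {
        return isset( $this->objects[ $property ] ) ? $this->objects[ $property ] : null;
    }
}
$GLOBALS['RUInTheme'] = new RUInTheme();

class RUInTheme_UI {
    private static $instance;

    // Create-or-fetch the singleton AND register it on the main object
    // under the shorthand 'UI' so $RUInTheme->UI works everywhere.
    public static function get_instance() {
        if ( ! isset( self::$instance ) ) {
            self::$instance = new self();
            $GLOBALS['RUInTheme']->add_object( self::$instance );
        }
        return self::$instance;
    }

    public function render_shortcode() {
        return '<div class="ruin-app"></div>';
    }
}

RUInTheme_UI::get_instance();
echo $GLOBALS['RUInTheme']->UI->render_shortcode();
```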



That is the guts of a fully functional WordPress theme with custom shortcodes and a Backbone processor, easily extensible with self-checking code (thanks to phpStorm) and object loading. It is memory efficient in the new responsive JavaScript app being driven by the WordPress REST API.

No, it is not the perfect model, but it is a smarter model for the architecture I am deploying. Some of this may even find its way back into Store Locator Plus and some of the other projects I have online; at least the PHP 5.2-compatible parts for my WordPress projects. Maybe someday WordPress will bump the requirement to PHP 5.6 and I can make more use of some super-cool PHP tricks.

Adopting GitFlow As My Branching Model

A couple of months ago I noticed SmartGit, my preferred git management tool, had a GitFlow button in the toolbar when I updated to the latest version. Curious, I decided to explore, and a month later I was using GitFlow as my branching model for most of my Store Locator Plus code repositories. While there are a lot of arguments for and against the GitFlow model, just like any other code-related topic these days, I figure using a widely accepted model is better than continuing with my own ad-hoc branching model. At the very least I can tell new developers on the team “we use GitFlow” and they can “Google it” to learn more.
[Screenshot: the SmartGit GitFlow toolbar button]
GitFlow makes it very easy to start a feature/hotfix/support update and merge it back into the develop (our previous prerelease) branch, as well as publishing releases and ensuring the updates are ported to master and develop. It is even easier with SmartGit, as I can click a button to start a feature and click it again when the feature is done. Same for starting and deploying releases. Anything I can do with one mouse click versus a dozen or two keystroke-click combinations is an efficiency gain. I prefer being efficient whenever possible.
Below is my summary of how I view the model and how I’m using it.

The Short Notes On Branches

develop = completed stuff ready for other dev (was our prerelease)
master = the final production version
features = a new feature/patch that will go into a later develop release.  When you “finish” a feature using a GitFlow tool it will auto-merge to develop.  Fast forward or merge commit style depends on how you configure your GitFlow tools.
release = a new release candidate branch.   When you think develop is ready you bump versions in the product and start a release branch.   This tells developers “this is in testing and we think it is ready for production”.
There are other branches as well, but I rarely use them: the hotfix and support branches, which you can read about on a lot of other GitFlow blogs. hotfix bypasses the feature/develop/release merging and overhead and applies a code branch directly to master when finished, which is then also pushed back to develop.

A Simplified Work Flow

Your existing codebase has a master branch in production.   Your develop branch is aligned and ready for new work.

You start a new feature or patch.   You create a feature branch usually named feature/super-cool-new-thing.   When you’ve completed the code and it looks stable you finish the feature.    GitFlow will merge this with your local develop.     No conflicts or other issues?  Push it.   The rest of your dev team has a new reference point for your upcoming release.

Start a release when you are done coming up with super cool new features.    When you start a release GitFlow will create a new branch named release/<version>.   If you’ve had prior release branches and tags GitFlow will see this and bump the next minor or point release.  If not you’ll need to name your first release.  Convention is to use the major.minor.point-release version format.

Put the release into production after testing.   When you finish a release, GitFlow will merge this with master, tag the commit with the version of that release, and make sure the develop branch is updated with any code changes that may have happened during testing/production.

Seems simple, but I quickly found a corner case that did not seem to be addressed.  What do you do when your release branch testing uncovers a bug?

My Patch Branch

One of the complaints about GitFlow is what to do when your code is ready for testing and you find a bug. The branch model does not clearly define the methodology, and in this vacuum a lot of people have shared their opinions and tried to fill the void. None of the methodologies I came across really appealed to me, so I created my own (which I’m sure someone else has done already).

After I start a new release branch I run through a series of tests. Sometimes the tests fail and I need to update the code. What do you do now? Start a new feature branch? It is not really a feature. It also makes it very hard to see which “feature” is needed to get the release into production versus a feature you (or some other developer) have started for a different future release. The bigger the team, the more problematic this becomes.

In order to increase visibility for the team lead managing the repository, I introduced the “patch” branch. Patch branches in my branch model are specifically for code changes to be rolled directly into the release (and back to develop) branches. It is the equivalent of a hotfix that goes on release instead of master.

While this is not a standard GitFlow model and thus I don’t have a one-button-click way to start and finish a patch release in SmartGit, the nomenclature “standard” keeps us all on the same page.

Better Release Management For Distributed Teams: GitFlow

Despite having a number of git interfaces available, including a half-decent one built right into phpStorm (my fav for any PHP development, ESPECIALLY WordPress), my go-to tool after 5 years remains SmartGit. I find the graphic interface far superior to any other git “helper” out there. The branch visuals and right-click shortcuts not only make me far more efficient than the command line, they have also made for far fewer mistakes when managing repositories. This is especially helpful when trying to see what the other developers are doing on the MySLP SaaS project and trying to coordinate a release.

Recently I decided to upgrade to SmartGit 17. It has been a few years since my last update and I wanted to check out the new stuff. While the majority of the UX remains the same and gave little reason to update, there was one new feature that intrigued me: “Git Flow”, a new button on the toolbar. One click later I was knee-deep in the “Git Flow” world. Less than a week later I’m “going for a swim”.


For those that don’t know, GitFlow tackles something we talked about many times with my prior Cyber Sprocket team: how to manage your git repository branches. Sure, the master branch is fairly easy. Nearly every team I know makes that THE “live” branch where your most recent “for public consumption” version lives. But what about the development cycle? What about branches for bug fixes? New features? Which branch is THE “ready for testing” / get-this-on-the-staging-server branch? How do you handle hot fixes; the things that should forgo all the overhead of standard branch management/integration/testing because a bug got through and is breaking the product for thousands of customers?

While working with DevriX on this project I quickly adopted their “develop” and “master” branches over my “prerelease” and “master” branches. It just made more sense. They may even be following this model already, but we never really talked about it, as we were too busy pushing forward to get MySLP launched by January (mission accomplished).

GitFlow, at least as far as I can tell, is quickly becoming a de facto standard for how to define those branches and what to name them. Rather than get into the details, I’ll leave you with this link to a decent resource that explains the methodology, and my typical “cheat sheet” on how it works below.

GitFlow Branches

You’ll want to check out the GitFlow post for visuals on these, it will help.

Feature – as in feature/<some-cool-short-thing-here> are all of the different features, non-emergency bug fixes, etc. that the team works on.  Disciplined coders will have a separate feature branch for each functional area they work on for an upcoming release.

They should always be branched from the latest develop branch.

Develop – a merge of all the finished feature branches that are ready for release.  Developers should “finish” their feature branches and merge them to develop when their branch is stable and is a candidate for release.

Release – the develop branch that is ready for testing to be deployed on the beta (staging) system.   I like to tag these as X.Y.Z-beta-n , though that is not part of the GitFlow model.

Any bug fixes needed to get the system to pass testing goes in here and are merged back into develop.

Master – the main production (live) release, tagged with a version number X.Y.Z. It only tracks the latest release branches that passed testing and are going to production.

HotFix – you really screwed something up, but that’s human nature. A critical bug made it to production. Create the patch on this branch and merge it back to master when it passes testing. Also merge it back to develop to ensure it gets into the baseline code the dev team is working on.


SmartGit and GitFlow

The really cool part about this method with SmartGit is that I can follow this model with one button click, creating a new feature that is super easy to integrate into the develop branch later with another single click. SmartGit, when using the Git Flow integration, will automatically manage the branches, including merging, deleting the finished feature branch (as per your choice), and all the other overhead of branch management, even ensuring you’ve pulled the latest develop/release/master so you don’t get out of sync.

If you get the entire team using the same model, and possibly even using GitFlow-aware tools, it will make it far easier for everyone on the team to understand what is going on. If you don’t already have a well-established branch management model, I suggest you make one, or better yet follow a model other teams also use, like GitFlow. No matter how big and established your team is, at some point it is nearly inevitable you will end up pairing with others outside your organization. At the very least it will make on-boarding new developers a lot easier if we all speak the same language.

