PHP Autoload and Singleton(ish) Model

Once every couple of years I take a month off from hacking away at the Store Locator Plus products and delve into some personal projects.  It is a way to learn some new things and try out new techniques without breaking the locator product.   With the locator being my primary source of income these days it is important to keep that intact while “trying new things”.

Some of the things I’ve been working on this week include Backbone, Bootstrap, React, Underscores, some new REST techniques, and a simplified object model for my internal “who cares about backward compatibility” PHP 7 projects.

One of the things I started playing with was autoload functionality.  Yes, it has been around since PHP 5, but it had a major overhaul in 7 and as such I think it is ready for production use.    Here are some things I learned along the way.

<tl;dr/>

If you employ autoloaded classes, use a singleton-style model.

Autoloading makes it tempting to litter your code with $obj = new Thing(); $obj->do_it(); and later, in another function, do a similar operation with $obj2 = new Thing(); $obj2->do_something_else();.

If you don’t use a singleton-style model you are spewing pointers all over the place and chewing up small chunks of memory. Those pointers can be a few hundred bytes to several KB each. Memory usage is less of a concern these days, but allocation and de-allocation (garbage collection) take time. These days performance is a critical design feature of any web or mobile app.

The Self-Managed Loader

In some of my older PHP code I use a self-managed loading mechanism. Throughout the code I use a require_once( '... class file ...' ) call. The PHP file uses the class_exists() function to wrap the entire class, along with a small snippet of code that checks a “global list of instantiated object pointers”. Before you start going off about “God objects”, that is not how this architecture works. The list is an array of object pointers I can test to see if a class is already instantiated; if it is, I use that pointer. The objects do not require the main list to function and can live as truly independent objects in most cases. I know from my application architecture that each class using this method should only EVER be employed once, so I use the “object list” to enforce that.

However, I didn’t like all of the require_once() calls littering my code. It is slightly more code clutter, and my guess is that checking the file resources table in PHP on every one of those calls is likely far less efficient than checking a symbol table (or PHP’s version of one).

An example of my “require registration” code…
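What follows is a hedged reconstruction of that pattern rather than the original file; the class_exists() wrapper, the global $slplus, and add_object() come from the description here, while everything inside the class body is a placeholder.

```php
<?php
// Sketch of the "require registration" pattern. Only register the class (and
// its single instance) if nobody has loaded it yet.
if ( ! class_exists( 'SLP_Text' ) ) {

    class SLP_Text {
        // ... text helpers for the plugin ...
    }

    global $slplus;
    if ( isset( $slplus ) && method_exists( $slplus, 'add_object' ) ) {
        $slplus->add_object( new SLP_Text() );   // register the one-and-only instance
    }
}
```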

Every single class has the class_exists() test wrapper and the global $slplus with the add_object() call. Inside the SLPlus class I have some utilities to simplify property names and a __get() magic method so I can reference $slplus->Text to get the single instance of the SLP_Text object.

Autoloading

PHP 5 had an autoload function that was invoked whenever you tried to instantiate a class but forgot to require the PHP class definition file. You could write a function that said “go look in this directory and require the source”. PHP has since reworked this under a different name, spl_autoload_register(), which is more intelligent about autoloading, to the point where the older __autoload() is deprecated as of PHP 7.2. I’m going to refer only to the newer spl_autoload_register() approach here. The ramifications for HOW you implement this (the Singleton thing mentioned above) are the same regardless.

As noted, autoloading is a way for PHP to perform a last-ditch effort to keep code from crashing when you reference a class but forgot to include the source explicitly in your code. It is a great way to reduce code clutter. The simple version: you write a function that takes the name of the missing class and tells PHP how to find that file in your source architecture, then usually requires it to avoid a crash.

Here is the overview from my new RUIn (Are You In? or “Ruin”) side project. This is a base class for a custom WordPress Theme I am building for this project. It does use the class_exists() test, but only because I need at least one global object to reference for other functionality in my theme. It will also be helpful for the singleton-style model I will describe later.

In my projects I make sure all of my PHP file names match the class name they contain. This is a general best practice that many PHP developers employ. It makes your life a LOT easier as you can do some neat code tricks like you see here. It also means modern IDEs, including my favorite phpStorm, can be far more intelligent about your code (auto-complete, grunt and npm task managers, syntax checkers, code smell, and a lot of other tools need far less wrangling). That means I can use something as simple as the following code to instantiate my objects when needed.
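A minimal sketch of that loader, assuming the conventions in this post (file name matches class name, application classes live in /inc/module/ and carry a RUInTheme_ prefix); the directory path and guards are my own placeholders.

```php
<?php
// Register a last-ditch loader: PHP only calls this when it hits a class it
// has not seen yet, so already-loaded classes cost nothing.
spl_autoload_register( function ( $class_name ) {
    if ( strpos( $class_name, 'RUInTheme_' ) !== 0 ) {
        return;   // not one of ours, let other autoloaders handle it
    }
    $file = __DIR__ . '/inc/module/' . $class_name . '.php';
    if ( is_readable( $file ) ) {
        require_once $file;
    }
} );
```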

 

Now, anywhere in my code that I need a new instance of an object, I can call $obj = new RUInTheme_Thingy(); and it magically includes the source. My architecture dictates that all modules go in the /inc/module/ directory and start with RUInTheme_ if they are part of this application. Nice. No more requires throughout my code.

Here is an extended example that loads WordPress admin functions only when a logged-in user is looking at backend admin pages, and loads UI functions only when someone is viewing a front-end page. It means the app is not loading memory up with code that is never going to be executed, such as UI features that aren’t used on an admin page and vice-versa.
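A sketch of how that split can look; is_admin() is the real WordPress test for backend requests, while the admin/ and ui/ directory names are assumptions for illustration.

```php
<?php
// Admin-only classes come from one directory, front-end UI classes from
// another, so code that will never run on a given request is never loaded.
spl_autoload_register( function ( $class_name ) {
    if ( strpos( $class_name, 'RUInTheme_' ) !== 0 ) {
        return;
    }

    $base = __DIR__ . '/inc/';
    if ( is_admin() ) {
        $paths = array( $base . 'module/', $base . 'admin/' );   // backend pages
    } else {
        $paths = array( $base . 'module/', $base . 'ui/' );      // front-end pages
    }

    foreach ( $paths as $path ) {
        $file = $path . $class_name . '.php';
        if ( is_readable( $file ) ) {
            require_once $file;
            return;
        }
    }
} );
```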

 

Now to extend my application I only need to add a class and matching file name in the inc/module/ directory and use new ClassName() in my code. Sweet!

The Un-Singletons

Now what about those singletons and the memory issue?

I quickly discovered that having super-clean readable code was cool but I was pissing memory all over the floor and mucking up my server. That’s fine for my local Vagrant box (though it was breathing heavy when hammering the app), but on a production server that would not be a nice “feature”. Here is what I was doing, since I didn’t need to reference the object, just employ its methods:
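A hedged reconstruction of that “new and forget” style; the class name and hook are hypothetical stand-ins.

```php
<?php
// The constructor wires up WordPress hooks; the object reference itself is
// never used again.
class RUInTheme_Admin {
    public function __construct() {
        add_action( 'admin_menu', array( $this, 'add_menu_pages' ) );  // the hooks do the work
    }
    public function add_menu_pages() { /* ... */ }
}

new RUInTheme_Admin();   // created purely for its side effects, never referenced again
```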

Functionally this was fine. The constructor set up the proper WordPress hooks and the methods did their magic whenever I needed them to. Great, clean code. Code that happens to piss memory all over the floor just for fun.

No, the memory leak was not a huge issue: a small 320-byte pointer that was never referenced. In reality the memory was not a leak, as the architecture only calls these objects once and they are never re-used. Whether or not I assigned the new object to a variable, the memory consumption would not change. Not a big deal until you start getting into larger, more complex apps.

The problem with larger apps is that you will likely end up with a class that holds an object that is polymorphic, or has independent properties for each element of a list of objects, or some other recursive deployment. In a very useless example, you would end up with dozens of 300-700 byte pointers hanging around and give PHP a lot more clean-up work with something like this:
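Something along these lines; the class and loop are hypothetical, purely to show the shape of the problem.

```php
<?php
// Every handler spins up its own copy of the same worker class because it
// cannot be sure anyone else already did.
class Location_List {
    public function render_row( $location ) { /* ... */ }
}

function ajax_render_locations( $locations ) {
    foreach ( $locations as $location ) {
        $renderer = new Location_List();      // a fresh object per element...
        $renderer->render_row( $location );   // ...used once and thrown away
    }
}
```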

While this is an over-simplified example that can be optimized in many ways, you can imagine this being part of multiple independent classes, such as an AJAX or REST processor that cannot be certain the Theme_Admin() object has been invoked and does not have public properties to share the instantiation. For these to be truly independent objects they must each hold their own pointers to the object. Luckily PHP 7 is smart enough to handle the object definition itself in shared memory, significantly reducing memory load and increasing performance, but there is still a lot of excess overhead with this model.

There is a better way.

A Singleton Autoloaded Model

Use a PHP class version of a singleton by employing a static instance property in your class. Pair it with a public method, and instead of using a new call to deploy an object, use your instance-manager method. This ensures all of your PHP references that use this model get back the same memory pointer. Less overhead for PHP overall, and with a large or highly recursive application, a lot less memory usage.

Since I employ this method for nearly all of the classes in my application, I create a base object that each of my auto-loaded classes will extend. To invoke a new object I use a static call to the class with something like Theme_Admin::get_instance(). Much cleaner than a long require_once() with a path.

Here is the full example:

RUInTheme_Object
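A hedged sketch of the base class; only get_instance() is named in this post, the instance bookkeeping is my assumption.

```php
<?php
// One instance per subclass, handed out through get_instance().
class RUInTheme_Object {
    private static $instances = array();

    public static function get_instance() {
        $class = get_called_class();                    // late static binding: one entry per subclass
        if ( ! isset( self::$instances[ $class ] ) ) {
            self::$instances[ $class ] = new $class();
            self::$instances[ $class ]->initialize();
        }
        return self::$instances[ $class ];
    }

    protected function initialize() { }                 // subclasses hook their setup here
}
```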

 

RUInTheme
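A hedged sketch of the main theme object, combining the behaviors described above (the autoloader, the add_object() registration, and the __get() shorthand); file paths and method bodies are assumptions, and functions.php is assumed to require this file and RUInTheme_Object.php directly.

```php
<?php
if ( ! class_exists( 'RUInTheme' ) ) {

    class RUInTheme extends RUInTheme_Object {
        private $objects = array();    // helper objects keyed by short name

        protected function initialize() {
            spl_autoload_register( array( $this, 'auto_load' ) );
        }

        // Pull RUInTheme_* classes from /inc/module/ when first referenced.
        public function auto_load( $class_name ) {
            if ( strpos( $class_name, 'RUInTheme_' ) !== 0 ) {
                return;
            }
            $file = get_stylesheet_directory() . '/inc/module/' . $class_name . '.php';
            if ( is_readable( $file ) ) {
                require_once $file;
            }
        }

        // Register an object under its class name minus the RUInTheme_ prefix.
        public function add_object( $object ) {
            $short_name = str_replace( 'RUInTheme_', '', get_class( $object ) );
            $this->objects[ $short_name ] = $object;
        }

        // Allows $RUInTheme->UserInterface, $RUInTheme->Admin, and so on.
        public function __get( $property ) {
            return isset( $this->objects[ $property ] ) ? $this->objects[ $property ] : null;
        }
    }

    global $RUInTheme;
    $RUInTheme = RUInTheme::get_instance();
}
```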

RUInTheme_Admin
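And a hedged sketch of an admin module built on that base; the hook and method names are hypothetical.

```php
<?php
class RUInTheme_Admin extends RUInTheme_Object {
    protected function initialize() {
        global $RUInTheme;
        $RUInTheme->add_object( $this );                                   // reachable as $RUInTheme->Admin
        add_action( 'admin_enqueue_scripts', array( $this, 'enqueue' ) );
    }

    public function enqueue() { /* ... admin-only CSS/JS ... */ }
}
```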

The Object List

You may have noticed I still have a __get() method in the main class as well as the objects property.    Even with the singleton method and autoloading I find it far easier to reference my instantiated objects from auto-derived properties on the main object.    This way I know that all of my primary utilities can be referenced via $MainObj->ClassNameWithoutBase.

I can also be confident that, since I’m using the Singleton-style implementation, I can call get_instance() for the classes I need whenever I am starting a logic path and later reference the methods.

In my example below I do manually define global $RUInTheme and provide a @var code hint. This is so phpStorm auto-completes all my method calls and type-checks parameters in real-time (far fewer bugs, yay!). I could get away with cleaner code by referencing $GLOBALS['RUInTheme']-><classname>->method() but I opt for faster development and syntax checking over saving a line of text.

Here is the UI code from RUInTheme that makes use of the Singleton, which auto-registers itself on the RUInTheme base object using a shorthand (dropping the RUInTheme_ prefix I use on all classes in this project).

RUInTheme_UserInterface.php
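A hedged sketch of what that file can look like; the enqueue hook is a placeholder, while the global, the @var hint, and the add_object() registration follow the description above.

```php
<?php
class RUInTheme_UserInterface extends RUInTheme_Object {
    protected function initialize() {
        /** @var RUInTheme $RUInTheme */
        global $RUInTheme;                      // the @var hint gives phpStorm the type for auto-complete
        $RUInTheme->add_object( $this );        // reachable as $RUInTheme->UserInterface
        add_action( 'wp_enqueue_scripts', array( $this, 'enqueue' ) );
    }

    public function enqueue() { /* ... front-end CSS/JS ... */ }
}
```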

RUInTheme_StartARUIn_Shortcode
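A hedged sketch of the shortcode module; the shortcode tag and markup are hypothetical.

```php
<?php
class RUInTheme_StartARUIn_Shortcode extends RUInTheme_Object {
    protected function initialize() {
        global $RUInTheme;
        $RUInTheme->add_object( $this );
        add_shortcode( 'start_a_ruin', array( $this, 'render' ) );
    }

    public function render( $atts ) {
        return '<div class="start-a-ruin"><!-- Backbone app mounts here --></div>';
    }
}
```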

That is the guts of a fully functional WordPress theme with custom shortcodes and a Backbone processor, and it is easily extensible with self-checking code (thanks to phpStorm) and object loading. It is memory efficient in the new responsive JavaScript app being driven by the WordPress REST API.

No, it is not the perfect model, but it is a smarter model for the architecture I am deploying. Some of this may even find its way back into Store Locator Plus and some of the other projects I have online; at least the PHP 5.2-compatible parts for my WordPress projects. Maybe someday WordPress will bump the requirement to PHP 5.6 and I can make more use of some super-cool PHP tricks.

Selenium IDE Rollups With StoredVars Logic

Creating rollups in Selenium IDE with execution logic based on storedVars can be tricky. Storing the value of an element, or its presence, within the rollup command list and using if or other logic blocks via the various “flow” add-ins will not work as you expect. Nor will storing the values and then using JavaScript logic within your rollup. Getting storedVars working in a rollup is a special kind of black magic.

Here are the methods that are working for me with my home-grown “Go With The Flow” add-in, which I brewed based on similar control-flow plugins.

Set Vars In A Separate Rollup

I find that setting my variables in a separate rollup is a simple solution. This ensures your variable stack exists and is set using standard Selenium IDE commands. Here is my example for testing if I am already logged in before prompting for user and password data and running the login:
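A hedged sketch of that rollup as a Selenium IDE user-extension rule; the locator used to detect a logged-in session and the variable name are assumptions.

```javascript
// Rollup rules live in a user-extension JavaScript file loaded by Selenium IDE.
var manager = new RollupManager();

manager.addRollupRule({
    name: 'set_my_vars',
    description: 'Store login state in storedVars before other rollups run.',
    args: [],
    commandMatchers: [],
    getExpandedCommands: function (args) {
        var commands = [];
        // Standard IDE commands set the variable, so storedVars is guaranteed
        // to exist for any rollup that runs after this one.
        commands.push({
            command: 'storeElementPresent',
            target: 'css=li#wp-admin-bar-logout',   // assumed "already logged in" marker
            value: 'already_logged_in'
        });
        return commands;
    }
});
```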

Use JavaScript Logic

Once your variables are set they are standard JavaScript variables that will be available throughout your rollups.  You can use this fact to build standard JavaScript logic in your rollup and decide what commands the rollup will push on the stack during each execution.   In your Selenium IDE command list you can then change storedVars using various store commands and then call the rollup afterwards; in essence changing what commands are executed “on the fly”.

You’ll notice that I also do a quick check at the top to ensure the set_my_vars rollup was already run by checking that the storedVars key I need is set. If not, it echoes a message to the Selenium IDE log console and skips doing anything else.
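A hedged sketch of that rollup, continuing in the same user-extension file (so it reuses the manager created above); the variable names, locators, and login commands are assumptions.

```javascript
manager.addRollupRule({
    name: 'login_if_needed',
    description: 'Only push login commands when set_my_vars says we are logged out.',
    args: [],
    commandMatchers: [],
    getExpandedCommands: function (args) {
        var commands = [];

        // Guard: make sure set_my_vars already ran.
        if (typeof storedVars === 'undefined' || typeof storedVars['already_logged_in'] === 'undefined') {
            commands.push({ command: 'echo', target: 'Run the set_my_vars rollup first.', value: '' });
            return commands;
        }

        // Plain JavaScript decides which commands get pushed on this run.
        if (String(storedVars['already_logged_in']) !== 'true') {
            commands.push({ command: 'type',         target: 'id=user_login', value: '${my_user}' });     // assumed vars
            commands.push({ command: 'type',         target: 'id=user_pass',  value: '${my_password}' });
            commands.push({ command: 'clickAndWait', target: 'id=wp-submit',  value: '' });
        }
        return commands;
    }
});
```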

 

Hopefully these tricks will help you with your Selenium IDE rollup builds. Some things to keep in mind: Selenium labels or endif markers will not exist within the rollup itself if you are creating them in that same rollup. Same concept with storedVars; a storedVars[<key>] entry will not exist and be available to your rollup JavaScript logic if you create the var inside the rollup itself. Once a rollup that creates a storedVars entry completes, the entry is available to any future commands, including rollups.

Adopting GitFlow As My Branching Model

A couple of months ago I noticed SmartGit, my preferred git management tool, had a GitFlow button in the toolbar when I updated to the latest version. Curious, I decided to explore, and a month later I was using GitFlow as my branching model for most of my Store Locator Plus code repositories. While there are a lot of arguments for and against the GitFlow model, just like any other code-related topic these days, I figured using a widely-accepted model was better than continuing with my own ad-hoc branching model. At the very least I can tell new developers on the team “we use GitFlow” and they can “Google it” to learn more.
(Screenshot: the SmartGit GitFlow toolbar button)
GitFlow makes it very easy to start a feature/hotfix/support update and merge it back into the develop (our previous prerelease) branch, as well as publishing releases and ensuring the updates are ported to master and develop. It is even easier with SmartGit as I can click a button to start a feature and click it again when the feature is done. Same for starting and deploying releases. Anything I can do with one mouse click versus a dozen or two keystroke-click combinations is an efficiency gain. I prefer being efficient whenever possible.
Below is my summary of how I view the model and how I’m using it.

The Short Notes On Branches

develop = completed stuff ready for other dev (was our prerelease)
master = the final production version
feature = a new feature/patch that will go into a later develop release.  When you “finish” a feature using a GitFlow tool it will auto-merge to develop.  Fast-forward or merge-commit style depends on how you configure your GitFlow tools.
release = a new release candidate branch.   When you think develop is ready you bump versions in the product and start a release branch.   This tells developers “this is in testing and we think it is ready for production”.
There are other branches as well but I rarely use them.  They are the hotfix and support branches, which you can read about on a lot of other GitFlow blogs.  hotfix bypasses the feature/develop/release merging-and-overhead and applies a code branch directly to master when finished, which is then also pushed back to develop.

A Simplified Work Flow

Your existing codebase has a master branch in production.   Your develop branch is aligned and ready for new work.

You start a new feature or patch.   You create a feature branch usually named feature/super-cool-new-thing.   When you’ve completed the code and it looks stable you finish the feature.    GitFlow will merge this with your local develop.     No conflicts or other issues?  Push it.   The rest of your dev team has a new reference point for your upcoming release.

Start a release when you are done coming up with super cool new features.    When you start a release GitFlow will create a new branch named release/<version>.   If you’ve had prior release branches and tags GitFlow will see this and bump the next minor or point release.  If not you’ll need to name your first release.  Convention is to use the major.minor.point-release version format.

Put the release into production after testing.   When you finish a release, GitFlow will merge this with master, tag the commit with the version of that release, and make sure the develop branch is updated with any code changes that may have happened during testing/production.

Seems simple, but I quickly found a corner case that did not seem to be addressed.  What do you do when your release branch testing uncovers a bug?

My patch Branch

One of the complaints about GitFlow is: what do you do when your code is ready for testing and you find a bug? The branch model does not clearly define the methodology, and in this vacuum a lot of people have shared their opinions and tried to fill the void. None of the methodologies I came across really appealed to me, so I created my own (which I’m sure someone else has done already).

After I start a new release branch I run through a series of tests. Sometimes the tests fail and I need to update the code. What do you do now? Start a new feature branch? It is not really a feature. It also makes it very hard to see which “feature” is needed to get the release into production versus a feature you (or some other developer) have started for a different future release. The bigger the team the more problematic this becomes.

In order to increase visibility for the team lead that is managing the repository I introduced the “patch” branch.  Patch branches in my branch model are specifically for code changes to be rolled directly to the release (and back to develop) branches.    It is the equivalent of a hotfix that goes on release instead of master.

While this is not a standard GitFlow model and thus I don’t have a one-button-click way to start and finish a patch release in SmartGit, the nomenclature “standard” keeps us all on the same page.

WordPress As An Application Framework

A summary of my notes from my SyntaxCon 2017 “WordPress As An Application Framework” presentation.

Constraints

Constraints of working in and around the WordPress application.

Built-In Overhead

Using any framework adds overhead to your application.  It is the trade-off for rapid development with well-vetted components.   The bigger the framework the more overhead.

WordPress did not originate with the intent to become an application framework. In addition, Automattic continues to be driven by what some would consider Matt’s over-zealous desire to maintain backwards compatibility with prior releases of the core product. Along with those attributes comes some overhead you may not see in other frameworks.

WordPress Heartbeat

The “heartbeat” is one example. WordPress is configured to fire off a “heartbeat” every 60 seconds. When the heartbeat process executes it loads the entire WordPress application and runs the configured “heartbeat tasks”. That means all plugins and themes are loaded and executed, often doing nothing. Depending on what you have running on your site, that can be a lot of overhead. Most plugins and themes do not “short circuit” when the heartbeat happens, despite the fact that they do nothing to process or influence “heartbeat tasks”. That means more overhead to literally do nothing.

On a positive note, one of the advantages of writing your own application on top of WordPress is that you control the environment. You can craft your own plugins and theme to short circuit if that piece of your app is not doing “heartbeat stuff”. At the top of most of my plugins you will find the following line, which basically says “don’t load up anything else in this plugin when the heartbeat comes in”, saving file I/O and server memory.
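A hedged version of that short-circuit: the WordPress heartbeat arrives as an admin-ajax.php POST with action=heartbeat, so the plugin can bail out before loading anything else.

```php
<?php
// At the top of the main plugin file: skip the rest of the plugin when the
// request is just a heartbeat tick.
if ( defined( 'DOING_AJAX' ) && DOING_AJAX
     && ! empty( $_POST['action'] ) && 'heartbeat' === $_POST['action'] ) {
    return;   // nothing in this plugin cares about heartbeat ticks
}
```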

 

Benefits

Benefits of having pre-defined libraries and UI components.

Time Savings

Time savings at each phase of development.

Flexibility

Flexibility of WordPress as a foundation.

Extensibility

Extensibility of WordPress both internally and via exposed services.

Scalability

Scalability of the application.

Selenium IDE Rollups With Arguments

As I prepare another release of Store Locator Plus with some new features, I’ve decided it is time to up my QA-fu with Selenium. I’ve been using Selenium IDE for a while now and find that, despite being free, it is one of the best user experience testing tools out there. I’ve paid for a few testing tools over the years and I always come back to Selenium IDE. The paid tools do not offer a lot more and are just as complex to learn when putting advanced testing techniques in place.

Simple Rollups

Speaking of advanced techniques, here are some new skills I picked up regarding Selenium IDE rollups. If you are not aware of what a rollup is, in its simplest form it is an easy way to group together oft-repeated Selenium commands into a single entry. This makes your test files easier to read and limits errors by ensuring consistency across your tests. For example, one of my earlier rollups does a simple “syntax error” check which is very useful when running tests of my WordPress plugins or the SaaS version of the application with debug mode enabled. It does little more than scan the body of the web page for known strings that typically mean the app crashed.

Here is what that rollup looks like:
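A hedged sketch of that rollup as a user-extension rule; the strings it scans for are typical PHP failure text and stand in for the real list.

```javascript
// Rollup rules live in a user-extension JavaScript file loaded by Selenium IDE.
var manager = new RollupManager();

manager.addRollupRule({
    name: 'check_for_errors',
    description: 'Scan the page body for strings that usually mean the app crashed.',
    args: [],
    commandMatchers: [],
    getExpandedCommands: function (args) {
        var commands = [];
        commands.push({ command: 'assertNotText', target: 'css=body', value: 'glob:*Fatal error*' });
        commands.push({ command: 'assertNotText', target: 'css=body', value: 'glob:*Parse error*' });
        commands.push({ command: 'assertNotText', target: 'css=body', value: 'glob:*Warning:*' });
        return commands;
    }
});
```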

 

Rollups With Arguments

Today I added a new level to my rollups: parameter passing.

This allows you to create a single rollup that serves multiple purposes. For example, before today I had 4 different rollups, one to open each of the 4 different tabs in my application. I’d call rollup open_general_tab or rollup open_experience_tab depending on what I was about to test next.

Now I have a smaller, more efficient rollup thanks to the use of arguments. I now call rollup open_tab tab=slp_general or rollup open_tab tab=slp_experience and the back end runs a single small set of code. That means less memory being consumed in the browser and less chance of errors as the codebase is reduced by nearly a factor of 4 in this case.

Here is my open tab rollup:
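A hedged sketch of that rollup (in the same user-extension file, reusing the manager created earlier); the locators are placeholders built from the tab IDs, and the key point is the args list plus how args.tab is used when the commands are expanded.

```javascript
manager.addRollupRule({
    name: 'open_tab',
    description: 'Open the named settings tab: rollup open_tab tab=slp_general.',
    args: [
        { name: 'tab', description: 'ID of the tab to open, e.g. slp_general' }
    ],
    commandMatchers: [],
    getExpandedCommands: function (args) {
        var commands = [];
        // args.tab carries whatever was passed as tab=... in the IDE command list.
        commands.push({ command: 'clickAndWait',           target: 'css=a#' + args.tab, value: '' });
        commands.push({ command: 'waitForElementPresent',  target: 'css=div#' + args.tab, value: '' });
        return commands;
    }
});
```

In the IDE the call becomes rollup | open_tab | tab=slp_general, as described above.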

 

And here is how it looks in the IDE:

(Screenshot: the open_tab rollup called with a tab= argument in the Selenium IDE command list)

If you are not familiar with extending Selenium, you will want to review the documentation on creating rollups. It is a basic JavaScript file that you include in Selenium IDE by setting the file name as a core extension. I also strongly recommend getting a flow-control extension for Selenium IDE. I’ve “rolled my own” based on the ide-flow-control modules of others, called “Go With The Flow”, which includes basic if, gotoIf, etc. calls.

 
