Why Your WordPress Plugin Should Have Almost Nothing In The Main Folder

As we continue to roll out our Store Locator Plus SaaS service, built on top of WordPress as our application foundation, we continually refine our plugin, theme, and API architecture.  One of the issues I noticed while testing performance and stability is how WordPress Core handles plugins.  Though WordPress caches plugin file headers, there are a lot of cases where it re-reads the plugin directories.

What do I mean by “read the plugin directories”?

WordPress has a function named get_plugin_data().  Its purpose is simple: read the metadata for a plugin and return it in an array.  This is where things like the plugin name, version, and author come from when you look at the plugins page.

However, that “simple” function does some notable things when it comes to file I/O.  For those of you not into mid-level computer operations, file I/O is one of the most time-consuming operations you can perform in a server-based application.  It tends to have minimal caching and is slow even on an SSD.  On old-school rotating disks the performance impact can be notable.

So what are those notable things?

It is best described by outlining the process that runs when it is called from the get_plugins() function in WordPress Core.

  • Find the WordPress plugins directory (easy and fast)
  • List every single file in that directory using PHP readdir and then…
    • skip over any hidden files
    • skip over any files that do not end with .php
    • store every remaining file name in an array
  • Now take that list of files and, for each one…
    • if it is not readable, skip it (most will be readable on most servers, so no time saved here)
    • call the WP Core get_plugin_data() function above and store the “answers” in an array; to do THAT, it has to do THIS for each of those files…
      • call WP Core get_file_data(), which does this…
        • OPEN the file with PHP fopen
        • READ the first 8192 characters
        • CLOSE the file
        • translate all newlines and carriage returns
        • run WordPress Core apply_filters()
        • do some array manipulation
        • run a bunch of regexes to match the strings WordPress likes to see in headers, like “Plugin Name:” or “Version:”, and store the matching strings in an array
        • return that array, which holds the “answers” (plugin metadata) WordPress is interested in
    • take that array and store it in the global $wp_plugins variable, keyed by the plugin basename

In other words it incurs a LOT of overhead for every file that exists in your plugin root directory.

Cache or No Cache

Thankfully, viewing the plugins page tends to fetch that data from a cache.  The cache is a string stored in the WordPress database, so a single data fetch and a quick parse of what is likely a serialized string gets you your plugins page listing fairly quickly.  However, caches expire.

More important to this discussion is the fact that there are a LOT of functions in the WordPress admin panel and cron jobs that explicitly skip the cache and refresh the plugin data.  Each of those runs the entire routine noted above.

Designing Better Plugins

If you care about the performance impact of your plugin on the entire WordPress environment in which it lives (and you SHOULD), then you may want to consider a “minimalist top directory” approach to designing your plugins.

The Best Practices page of the Plugin Developer Handbook mentions “Folder Structure” and shows an example of something like this as your plugin file setup:
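From memory, the Handbook’s example looks roughly like this (paraphrased, so check the Handbook for the current version):

```
/plugin-name
     plugin-name.php
     uninstall.php
     /languages
     /includes
     /admin
          /js
          /css
          /images
     /public
          /js
          /css
          /images
```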

However, it doesn’t get into the performance details of WHY you should have an includes directory and what goes in it.

In my opinion, EVERYTHING that is not the main plugin-name.php or uninstall.php file should go in the ./includes directory, preferably in class files named after the class, but that is a discussion for another blog post.

If possible, you may even want to make plugin-name.php as minimalist as possible, with almost no code. Even though the fread in WordPress Core get_file_data() only grabs the first 8192 characters, most of that content is “garbage” that it will not process because it is not part of the /* … */ comment block it is interested in.  If you can get your main plugin-name.php file down to something like 4K because it only contains the header plus a require_once( './includes/main-code-loader.php' ); or something similar, the memory consumption, regular expression parsing, and other work done by get_file_data() is reduced.
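Concretely, the whole main file can be little more than this (the plugin name and author here are placeholders, and the loader file name is just the example used above):

```php
<?php
/*
Plugin Name: My Plugin
Version:     1.0
Author:      Example Author
*/

// Keep the bootstrap tiny: the header comment above for WordPress,
// one include below, and everything else lives in ./includes/.
require_once __DIR__ . '/includes/main-code-loader.php';
```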

No matter what your code design, it is going to have some performance impact on WordPress.  My guess is it will be especially notable on sites that have 3,987 plugins installed and are running an “inline” WordPress update.  Ever wonder why the latest version of your premium (not hosted in the WordPress Plugin Directory) plugins doesn’t show up?  It could be because WordPress spent all the time granted to a single PHP process reading the first 8K of 39,870 files, because each of those plugins had a dozen-or-so files in the root directory.

Help yourself and help others.  Put the bulk of your plugin code in the includes folder.  The WordPress community will thank you.

 

Hacking Snagit PNG Files For Clarity

While working on some updates to the Store Locator Plus website I ran into a rather annoying issue with the logo for the website.  On the documentation site the text beside our Store Locator Plus logo had excellent clarity.  When I created the exact same logo at the size I wanted, the graphic on the new site was always far less crisp than on the documentation site.  It turns out that Snagit is not very good at rendering OTF fonts to PNG for Retina displays.

The Original

The original image used on that site is from Snagit.  Unfortunately Snagit is still unable to handle the Scalable Vector Graphics (SVG) format, so the graphic portion of the logo is a Snagit screen capture of Firefox rendering the SVG on a Retina display.  This provides a high-resolution graphic that Snagit does a fine job of scaling down to my 100-pixel height without a lot of artifacts.  The text does not scale as well, so I use the native OpenType font (OTF) file for the Moon typeface and the text tool to add the text to the logo.

This is the end result of the standard PNG output from Snagit 4.1.1:

Browser Scaling

As it turns out, using Snagit to scale the original graphic down from twice the needed size to the size I want for the header doesn’t work as well as I had hoped.  In the past, using a graphics application to scale images ALWAYS yielded superior results to in-browser scaling.  Graphics-app scaling also means faster page loads: an image twice as large that is scaled down by the browser downloads something twice as big as it needs to be.  This is much like driving a bus to the grocery store instead of the family sedan.  Not very efficient.

However, the resolution of the scaled images from Snagit does not retain the clarity needed for today’s Retina displays.  For those that are not aware, Apple Retina displays, and now many 4K displays, have more than TWICE the resolution of monitors from a few years ago in the same real estate.  They use internal trickery to make older low-resolution images look right on screen, and they absolutely shine when given graphics and photos shot at much higher resolution.  Images look clearer, better defined, and have deeper, more natural-looking color palettes.

As it turns out, web browsers on Retina devices tend to do a far better job of scaling images these days than most common graphics apps.  Here is the same image at twice the resolution, in both the native size and scaled.

Browser Native 700×160

Browser Scaled To 350×80

Snagit Original PNG 350×80

If you are viewing the images on a high resolution screen you should notice that the Browser Scaled image in the middle is better defined than the Snagit image that was sized specifically to the 350×80 dimensions.

PDF Files and DPI

Being the scientist and tech geek that I am, I was not going to just use browser scaling and move on to my next project.  It is a disease that impacts my productivity every day, but I had to know a bit more about what was going on, and I wanted the CORRECT results, not a hack that worked “well enough for today”.  I wanted to serve up a 20KB image, NOT a 96KB image, every time someone loaded my page.  Less bandwidth is still important for all those people on pay-for-what-you-consume mobile plans, and it is a good idea in general for page loading performance.

I soon discovered that exporting the image from Snagit to a Portable Document Format (PDF) file always kept the image crisp and clear.  This makes sense, as a proper PDF contains graphic images and font data inside the document.  That means the special font in my image carries all of the formulas necessary to draw each letter using math that describes a curve or a line, instead of binary on/off pixels arranged in a specific pattern.  It makes for much clearer-looking fonts at a variety of resolutions.

It also turns out that from within the Preview application on macOS you can export that PDF to a PNG file with an added twist that is not available in Snagit: you can set the dots per inch (DPI) of the file being output.  The more dots you have to draw something like a letter “O”, the clearer it looks.  Think about it.  An “O” made up of 4 dots doesn’t look much like an “O”, though it does make a good diamond shape.  With 40 dots that “O” resembles a circle.  This meant I could set the DPI to various levels and see how each looks in the browser.  I ran a series from 72 to 600 DPI.  Here is what I got:

Snagit 72dpi PNG Save

PDF Export 72dpi (350×80) No Scaling Necessary

90DPI (437×100) Browser Scaled to 350×80

150dpi Browser Scaled to 350×80

300dpi (1458 x 333) browser scaled to 350×80

600dpi (2916×666) Browser Scaled to 350×80
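The pixel dimensions in those captions are just DPI arithmetic: the 350×80 logo is defined at the 72 DPI screen baseline, so each export is dpi/72 times larger in each direction. A quick sketch, using the truncation the captions appear to use:

```javascript
// The 350×80 logo is defined at the 72 DPI screen baseline, so a PNG
// exported at a higher DPI is dpi/72 times larger in each direction.
// Math.floor matches the truncated sizes shown in the captions above.
function pngSizeAtDpi(dpi) {
  return {
    width: Math.floor((350 * dpi) / 72),
    height: Math.floor((80 * dpi) / 72),
  };
}
```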

To get a better idea of what DPI is and how it impacts clarity when drawing circles, click the above images to view the full-size, non-scaled versions.  Zoom in so the images take up the same space on your screen and you will see the pixel artifacts hidden in the image, which your brain perceives even if you can’t quite point to them in the original.

And just for fun, a copy of the 700×160 image converted to pixels instead of an OTF font within Snagit, then scaled down to 350×80 for comparison.

Snagit 700×160 Image Versus OTF Scaling to 350×80

Snagit OTF Saved As Native 350×80 Image

Summary

There are a few conclusions to be made from this exercise.

  • SVG formats need to be more widely accepted.
    They are far superior for scaling and rendering of images.   This logo and font would look much nicer and only require a single mid-size file to look great on ALL screen resolutions and ANY size.

    • Snagit needs to support SVG reading and writing.
    • WordPress needs to allow SVG formats anywhere PNG, GIF, and JPEG images are allowed.
  • The native output DPI for PNG files from Snagit is 72.
    This makes sense, as the “web standard” for screen output has always been 72 DPI. Printed media went from 300 DPI to 600 DPI to 1200 DPI.

    • Screen media standards need to be updated for the high pixel densities found on today’s devices.
  • Browser scaling algorithms on high resolution displays are currently better than those in both the Preview and Snagit apps I used in these tests.

This is by no means a complaint about Snagit or any of the other applications noted here.  In fact, I LOVE Snagit and use it as my go-to tool for creating instructional videos and images for online documentation.  Hopefully these findings help you determine the best solution for your image management so you can balance performance and the user experience for your web and mobile apps.

 

Save Money With RDS / Get Your gSuite Mail Forwarding

Two quick hacks I learned today while doing some general “tech life” maintenance.

Amazon Web Services – Saving Money

The first “hack” is an easy one that I am now kicking myself in the ass for not picking up on 6 months ago.  This is a $600 oversight, which is a LOT of beers’ worth of savings.  The trick is simple…  RESERVED INSTANCES.  Especially with RDS.

It turns out that for RDS (and possibly EC2 and other instance types) you can “purchase” a No Upfront Costs reserved instance.  The cost?  FREE.  The savings can be substantial: for an M4.Large instance it can be as much as $80/month!

I have been running an M4.Large RDS instance for months.  The hourly rate is $0.350.  That runs $3,066 per year, or about $255/month.

By purchasing a reserved M4.Large instance for 1 year with no upfront fee, the rate drops to $0.241/hour, or $2,111 annually.  That is about $175/month.  A quick $80/month saved for clicking a few buttons and pressing Submit.

In my case I opted for the partial upfront payment.  You pay $648 today as a one-time charge for that M4.Large reservation for 1 year, and the hourly rate drops to $0.132/hour.  That is $1,156/year in hourly charges, or about $150/month once the upfront fee is included.  That is $105/month less than what I was paying.  Sweet!
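For anyone who wants to check my beer math, here is the arithmetic in a few lines of JavaScript (the rates are the M4.Large RDS prices quoted above and will drift over time):

```javascript
// Sanity-checking the reserved-instance math from the text.
const HOURS_PER_YEAR = 24 * 365; // 8760

// Total yearly cost: optional one-time upfront fee plus the hourly rate.
function yearlyCost(hourlyRate, upfront) {
  return (upfront || 0) + hourlyRate * HOURS_PER_YEAR;
}

const onDemand = yearlyCost(0.350);            // ≈ $3066/year, ~$255/month
const noUpfront = yearlyCost(0.241);           // ≈ $2111/year, ~$175/month
const partialUpfront = yearlyCost(0.132, 648); // ≈ $1804/year, ~$150/month
```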

gSuite Email Forwarding

I have been using gSuite / Google for Work / Google for Business, or whichever of the half-dozen names they’ve used in the past 10 years, for… well… about 10 years now.  One of the key features I use regularly is email routing, which allows a single domain-wide email address to be forwarded to both in-company gSuite users and non-Google users (such as Yahoo! email addresses).  It makes it easy to give customers (or my son’s tennis team) a single short email address that reaches multiple people.

It turns out that the forwarding rules can be written as regular expressions.  Add a gSuite email route for skunkworks@yourdomain.com, set the recipients to a half-dozen different email addresses, and anything sent to skunkworks@ will be broadcast to the team.

It also turns out that the regular expressions let you do some pretty cool things, like routing email sent to skunkworks@(yourdomain|mydomain).com to that same group of people whether the sender used yourdomain.com or mydomain.com in the address.

Pretty cool.

But… this system happens to break a common tenet of nearly every major email system in existence.  It is CASE SENSITIVE.

What?!?!

Yup.  Email to Skunkworks@ will NOT go anywhere, and the sender will get an “undeliverable” message unless they are on the Google family of services.

Well, it turns out a simple hack fixes that: add ^(?i) at the start of the expression to make it case insensitive.
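So the full route pattern for the earlier example would look something like this (the domains are placeholders; gSuite routing uses RE2-style syntax, where (?i) turns on case-insensitive matching for the rest of the pattern):

```
^(?i)skunkworks@(yourdomain|mydomain)\.com
```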

 

Selenium IDE Extensions Hacking

I use Selenium IDE as a tool for testing the Store Locator Plus WordPress plugin.  It is a great tool for automating browser interactions and sussing out basic problems.  With the launch of the My Store Locator Plus SaaS service we need to build more complex tests spanning multiple accounts and services.  Thankfully Selenium IDE not only has a myriad of plugins but also makes it easy to create your own.

While “officially sanctioned” Selenium Plugins are true Firefox browser plugins you can deploy your own commands written using the Selenium IDE objects and interfaces.   These can be included by listing your local JavaScript file in the core extensions file list.

Now for the bad news.  The documentation for building extensions is horrid, outdated, and in places non-existent.  Which is where this article comes in.  These are my personal notes on how to hack together your own Selenium IDE extensions.

You can create your Selenium extension like any other JavaScript program.  Once you’ve created it, add it via the Selenium Options menu, under core extensions.

Selenium.prototype.do<command>

This is the JavaScript prototype method you use to define a new Selenium IDE command.  It needs to point to a function that is passed 2 parameters.  The first parameter is what the user put in the Target field.  The second parameter is what the user put in the Value field.

This would add a new command you invoke as myCommand in Selenium IDE.

The command definition in the JavaScript prototype must start with an uppercase.

The command in the SCRIPT FILE will start with a lowercase.
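A minimal sketch of such a command (the command name and its behavior are made up for illustration; inside the IDE the Selenium object already exists, so the stub below is only there to make the example self-contained):

```javascript
// In the real Selenium IDE the Selenium object is supplied by the IDE;
// this stub stands in so the sketch runs on its own.
function Selenium() {}

// Defines a command written as `myCommand` in the script file
// (uppercase after `do` in the prototype, lowercase in the script).
// `target` and `value` are the Target and Value columns of the IDE row.
Selenium.prototype.doMyCommand = function (target, value) {
  if (!target) {
    throw new Error('myCommand requires a target');
  }
  // A real command would act on the page here; this one records its input.
  this.lastCommand = { target: target, value: value };
};
```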

this.continueFromRow( <int> )

Run the Selenium IDE command in the current test from the specified entry.

throw new Error( <string> )

Generate an error that will be logged by Selenium.

Selenium Variables – storedVars

Any store commands will place the variables in a named index in storedVars[<name>] where name is the value you assigned during the store.

testCase

The Selenium object that contains details about the current test being run.

testCase.debugContext.debugIndex

Get the current debug index.    In theory the currently executing row.

testCase.commands

Contains the array of all the commands for the current test case.

testCase.commands[<int>].type

Type of command.  At least one valid type is ‘command’.

testCase.commands[<int>].command

The command the user entered, first entry in a Selenium row.

 

More Info

You can find some of my hacks based on the Sideflow Selenium plugin on Go With The Flow over at Bitbucket.

Some jQuery Foo I Learned While “Leveling Up” This Weekend

I am the first to admit that I got on the JavaScript bandwagon a little late.  I was a bit hesitant because of my work with government projects a half-decade ago.  When you work with the US Government you quickly forget the “best of” when coding web apps and instead use their default protocol of “the oldest crap possible”.  I would not be surprised if they are still using Internet Explorer 6 as their go-to standard.  For the non-geeks, that is the equivalent of setting the standard vehicle fleet to a Ford Model T.  Sure, it is a car that runs on petroleum, but it sure as hell isn’t going to get you and your family ANYWHERE safely.

Just 2 years ago I started adding some JavaScript to my locator web app.    It helped bring my 2013 app up to 2001 web interface standards.  A little.   Then I learned about jQuery, a library of features and functions that does a lot of the heavy lifting for you.   It is like going from sawing your own lumber from trees to going down to the lumber yard and picking up 2x4s to build your home.   Much easier.

The Slightly Newer But Old Way

Then I learned that jQuery and many of the pre-built “nice to have” frills ship with WordPress Core.  What?!?!  Why do 90% of the plugins and themes, from which I snarf a lot of code to make it look like I know what I’m doing, not know this?  During the past year I’ve been learning a lot of new code tricks from my friends at DevriX and teaching myself more by exploring things like advanced jQuery trickery.

New Themes

So now, way down here after my rambling, are my notes on what I learned about jQuery this weekend where I felt myself “level up” on that particular skill.

As you read these tips you’ll notice that I use the jQuery “long form” instead of the common $ shorthand. I have a good reason for that: lots of WordPress plugins are poorly written, assign no-conflict mode with the $ shorthand improperly, and break my application.  When you have 15,000 installations you tend to do things “long form but less prone to others breaking it”.  Where you see jQuery… in my examples, “real world” code will most likely use $ instead.

Cache Your Queries

When you want to work with an element on a page you can use jQuery to help find the element and make your changes.    You tend to see stuff like this:

That is not very efficient. jQuery( <selector> ) searches the entire web page each time, and a whole lot of JavaScript code runs EVERY TIME it is processed, UNLESS <selector> is a JavaScript object instead of a string.  Lots of code running = slower web apps.

Instead, make jQuery “cache” the objects it finds the first time around by assigning the selection to a variable.  The “lots of code” runs once, and in the later examples the variable gives jQuery a subset of elements to look through instead of your entire stack of page elements.
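For example (the selector and the width change are stand-ins):

```javascript
var the_dash = jQuery('.dashboard-aside-secondary');
the_dash.css('width', '350px');
the_dash.show();
```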

In this mode jQuery reads the entire web page ONCE and stores the matching objects in the_dash.  It can then quickly modify just those elements as requested.

Extend Your Cached Queries

Now that you are caching your queries and making your site visitor’s laptop or mobile device work a lot less (which, believe it or not, can extend their battery life by a whole microsecond), you can extend those cached queries without doing the “whole read the page thing” again.

Here is how I used to find the sidebar, modify it, then find all the images in the sidebar and hide them:
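It looked something like this (a reconstruction; the selectors are the ones discussed below):

```javascript
jQuery('.dashboard-aside-secondary').css('width', '350px');
jQuery('.dashboard-aside-secondary > IMG').hide();
```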

Nice short code that is maybe a little easier to read, but it is horribly inefficient in the “350px” mode.  JavaScript reads the entire web page, seeks out the sidebar, and changes it.  Then it reads the entire web page again, finds the sidebar, reads everything inside it, finds the images, and changes them.  That is a lot of JavaScript code executing.  Executing code takes time. Time is money, as they say.

And here is the far more efficient version:
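Something like this (a reconstruction; the variable names are the ones used in the discussion below):

```javascript
var help_sidebar = jQuery('.dashboard-aside-secondary');
help_sidebar.css('width', '350px');
var help_images = jQuery('.dashboard-aside-secondary > IMG');
help_images.hide();
```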

In this mode it reads the entire page once and keeps track of what it found in help_sidebar.  It then changes what it found without searching again, because jQuery is working on help_sidebar, which is an object.  If that were all we were doing with it, this would actually be a bit slower, since we take on the overhead of storing the object in a variable as noted above.

However, when we do the second “change all the images inside that object” step, we gain back that lost microsecond a hundred fold.  The second jQuery(help_images)… call used to modify the images no longer has to search the entire web page.

BUT… there was a problem.    How do you add “extended selectors” to the cached jQuery?

Above we had ‘.dashboard-aside-secondary > IMG’ to find our images.  That is MORE than just the ‘.dashboard-aside-secondary’ we stored in our cache.  Uggh.

Find() To The Rescue

Luckily jQuery has a number of methods that extend your selectors and help you traverse the DOM.  You can find this under the Traversing jQuery docs page.

find() can take any selector, or an OBJECT like the one containing our sidebar, and look for elements inside it.  As a jQuery padawan I had only ever seen it used to find things within the entire DOM.  Being a slow learner, it never dawned on me that it could be applied to ANY part of the DOM, not just the whole thing.

jQuery(help_sidebar).find(‘IMG’) looks only within the sidebar to find images, which is far faster than reading the entire page.  It then changes the images it finds.
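Putting the whole pattern together (same stand-in selector as the earlier discussion):

```javascript
var help_sidebar = jQuery('.dashboard-aside-secondary');
help_sidebar.css('width', '350px');
help_sidebar.find('IMG').hide(); // searches only inside the cached sidebar
```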

Children() Is One Level of Find()

One of the incorrect paths I went down, though it is very useful to know, is children() in jQuery.  It finds only the matching elements one level deep in the object stack.  Since you’ve read this far you are a code geek like me, so I know you understand that most web pages are many levels of nested elements, and often you want something “deeper down”, where you need your great-great-great-grandchild involved.  However, there are plenty of cases where I can use children() to impact just the next level of menu divs, for example.

Summary

Truly understanding how jQuery selectors and “caching” work, and how to refine those cached selections with the jQuery traversal methods, is going to bring the efficiency of my apps up to a whole new level.  It may only save a half-second of processing time per page interaction, but it all adds up when you have 15,000 websites hosting millions of page views every day.

For my fellow code geeks out there, I hope you learned something new and I’ve given you a shortcut to reaching the next level of your jQuery skills.

Sidebar: Why “caching” in quotes? Because this doesn’t seem like caching to me but rather object-passing, but maybe I’m missing something I’ll learn at level 3.
