A media file is a digital recording of a song. The most common format today is MP3, which comes in various “flavors” that determine audio quality. MP3 is a “lossy” format, meaning its compression algorithms trim off pieces of the audio data that the listener is unlikely to hear. FLAC is another common format and is considered lossless: its compression preserves all of the original digital data, which is restored exactly on playback.
The quality of any media file recording will depend on the original sound recording, the master recording, and how it was turned into a “digital master.” Many variables affect the quality of the work, including the type of equipment used in the studio and the quality of the conversion hardware (the analog-to-digital / digital-to-analog converters).
Prince – Negotiating Rates For His Music
For some time Prince’s legal team has worked to pull his music off YouTube, and the song “Breakfast Can Wait” remains the only track on his channel.
Sources say Prince is using Web Sheriff to send notices to digital services. According to sources, the notice from Web Sheriff says that Prince has pulled his music from all U.S. PROs, so there are no reciprocal rights abroad. He wants every digital service to pull down his music, as recorded by him, so that once they have complied, they can negotiate with his own publishing company.
Once they agree to whatever rate he is seeking, digital services can then put his music back up. The takedown notice doesn’t impact those songs of his covered by other artists. Services are still allowed to play songs like Sinead O’Connor’s “Nothing Compares 2 U,” which Prince wrote and she covered.
Free Market Vs. First Amendment Copyright Law
There was agreement on how to make licensing a free market negotiation. David Israelite, President and CEO of the National Music Publishers Association, prefers that Congress abolish the compulsory license so publishers could freely negotiate with licensees for mechanical rights. Lee Thomas Miller, songwriter and President of the Nashville Songwriters Association International, called for Congress to eliminate or “drastically alter” the ASCAP and BMI consent decrees. Michael O’Neill, CEO of BMI, called for the elimination of the consent decrees. “We’re trying to give the songwriters and publishers the power to make their own deals.”
CRB Rate Setting Methods
Under current U.S. copyright law, whether and how much these copyright holders get paid by broadcasters for the use of their intellectual property depends on a dizzying mix of factors. For musical composition copyrights, the royalty system is generally reasonable and technology-neutral, with broadcasters typically paying in the range of 2 percent to 5 percent of their gross revenue to the holders of music composition copyrights.
However, for sound recording copyrights, the royalty rates vary dramatically depending on who is playing the tunes. For digital music broadcasts, the three judges on the Copyright Royalty Board (CRB) determine “statutory” royalty rates using two different standards: one called 801(b) that applies to older services like Sirius XM satellite radio, and one called “willing buyer/willing seller,” which is used for the newer field of Internet radio.
Unless you are a math nerd you will likely skip this article right about… now.
For those math nerds still reading, I learned something new today that I found interesting. It makes perfect sense and is one of those “why didn’t I think of this” moments. Working on a territories algorithm for Store Locator Plus presented a problem I’ve not had to solve in the past 4 years of building the product: how do you determine if a given point on earth is inside an area described by a series of locations that represent the boundary of a territory? In plain English, “When a user says ‘I am here,’ is ‘here’ within the territory serviced by a company?”
Point In Polygon Algorithms
There are a number of ways to determine if ‘here’ is inside a given area. In mathematics, locating ‘here’ within a territory maps directly to the point-in-polygon problem. ‘Here’ is the point where the user is now, with the latitude/longitude pair serving as the x,y coordinates of that point. The territory is described as a series of latitude/longitude (x,y) coordinates that form the outline of a polygon. You can then employ any of several algorithms to calculate whether the point is inside the polygon, such as the “winding number” algorithm. My favorite, however, is the “Even Odd Rule” algorithm due to its simplicity and the speed at which it can be computed. The classic winding number approach uses “circular math” involving functions like sine and cosine, which are computationally expensive.
Even Odd Rule
The Even Odd Rule takes the given point and casts a ray from it in a fixed direction. If the ray crosses an even number of polygon borders, the point is outside the polygon; if it crosses an odd number, it is inside. There is a caveat: a point that lies exactly ON the border is considered “outside,” but that can be a matter of semantics; “you said INSIDE, not on the edge.” Also, given the sub-meter precision of the floating-point latitude/longitude values Store Locator Plus uses for territories, it is probably fine to lose that meter to the “on the border” rule.
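Here is a minimal Python sketch of the even-odd rule. Store Locator Plus itself is a PHP/WordPress product, so this is an illustration of the algorithm rather than the plugin’s actual code; the function and variable names are my own.

```python
def point_in_polygon(point, polygon):
    """Even-odd rule: cast a horizontal ray from `point` to the right and
    count how many polygon edges it crosses. Odd = inside, even = outside.
    `point` is an (x, y) tuple; `polygon` is a list of (x, y) vertices.
    Per the caveat above, a point exactly on an edge may come back 'outside'.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around to close the polygon
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # one more crossing: flip the parity
    return inside

# A square "territory" from (0,0) to (10,10)
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon((5, 5), square))   # True: inside
print(point_in_polygon((15, 5), square))  # False: outside
```

Note that this treats latitude/longitude as flat x,y coordinates, which is a reasonable approximation at territory scale and fits the “lose a meter” tolerance described above.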
Calculating the number of “border crossings” is fairly easy and runs quickly unless you have an extremely complex polygon with thousands of points describing the border. That won’t be the case for my product. The efficiency and accuracy of this algorithm are a perfect fit.
Sometimes you can discover beauty in the simplicity of what otherwise can seem like a complex problem by using math to describe your world.
Yes I know. I’m a math geek.
Hopefully this article will save at least one other person an hour of their life trying to figure out why they cannot clone a Bitbucket repository when using SSH.
My projects are broken into several teams, each with its own developer and administrator users. Each team manages a number of repositories. There is one common denominator: I have admin access to all repositories. That means my Bitbucket user should have full read/write/admin privileges on every repo. Yet no matter how many different keys I added to my Bitbucket user account, it would not let me clone several repositories.
For those that want the short answer of what worked… use the “long form” SSH URL.
git clone ssh://git@bitbucket.org/<team_or_user_account_name>/<repo_name>
While I typically use the “short form,” as noted below, it absolutely would NOT work for certain repositories or with different pre-shared keys, even on repositories where the short form normally works. Sadly, the short form is what Bitbucket serves up when you look at the clone interface on their website. Here is the short form of the above URL:
git clone git@bitbucket.org:<team_or_user_account_name>/<repo_name>
Secure Access To Repos
With private repositories you always want some form of authentication to prevent anyone with the URL from cloning your project. Your options with Bitbucket are to use an API key, use OAuth, or set up SSH access with shared keys. API keys can be nice, but you need an app that manages the “handshake” with Bitbucket and interfaces with your git client or system-level network stack. OAuth is similar but allows more control with a user/password-style setup, so you can lock out one person, whereas an API key is an “everybody/nobody” solution. SSH is already set up on any system you are likely to use, and with a little effort you can quickly learn how to create your public/private keys and share them.
Setting Up SSH
For Linux/OSX systems you can quickly set up your SSH keys by logging into the account you will be using to do the clone. You will need a .ssh directory in which to generate your SSH keys. There are plenty of articles on how to do that, including “Set up SSH” on the Bitbucket site. In short, you run ssh-keygen, copy the id_rsa.pub contents, and add the key under your user account in Bitbucket.
Cloning Your Repo
Normally you can just go to your directory and do something like this:
git clone git@bitbucket.org:storelocatorplus/store-locator-plus.git
Maybe that is a bad example since that repository is wide open and won’t require security, but the concept is the same. The problem is that for some repositories you get something like this:
Cloning into 'store-locator-plus'...
conq: invalid command syntax.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
That is very special. It appears to be unique to Bitbucket, though I’ve not researched that, so don’t take the statement as fact. It also seems to occur only if you are running ssh-agent as instructed in the “Set up SSH” article cited above.
If you are not running ssh-agent you may see this instead:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
No “conq” but essentially the same message.
If you follow the “debugging SSH connections” article on Bitbucket, it tells you to run the ssh -Tv git@bitbucket.org command to get some clues as to why your authentication failed. The problem is that my SSH sessions were NOT failing when tracing the connection; I was getting a valid connection to my user account on Bitbucket.
Finally, after all other possible solutions failed I tried the alternate URL format. It cloned the repository.
git clone ssh://git@bitbucket.org/storelocatorplus/store-locator-plus.git
Magic. Black magic. Possibly even evil. But it works.
Hopefully this saves you from creating a dozen different SSH keys on a half-dozen different servers. It may save you from setting up alternate identities or complex SSH authentication models. You probably don’t even need to run an ssh-agent if you only have a single key for your login. In short this simple trick may save you a lot of frustration.
And some people wonder why I’m bald. Lifestyle choice? Nah. I’m just another code geek that has been at this for a long time.
Any of the tech geeks that have worked for me in the past have heard me say it a million times.
“Check your indexes, people!”
It is the single most overlooked issue, and it often yields the biggest performance gains on any SQL-driven data system, WordPress included.
I cannot tell you how many times a junior coder or systems person has walked into my office and asked me to help them resolve an application performance problem. The first thing I ask them is if the problem is data related. The very next question is “have you checked your indexes?”. More times than I can count they find an improper or missing index. Using SQL tools like ‘explain’ and building a proper index for the query that is causing problems can yield big performance gains with little effort.
Today I ran into a performance problem using my WordPress Dev Kit plugin that serves plugin updates to my WordPress plugin customers. The dashboard on the sales site was horrendously slow. The admin panel would take up to a full minute to load. Building the right index on my data table brought that time down to less than 3 seconds.
Finding The Problem
I started by installing and enabling Query Monitor on my site. This allowed me to see what was taking so long to execute. The first report, a red herring, was the 18,000 entries from the wp_options table that were being loaded.
Turns out there were 15,000+ entries for _site_transient_brute_loginable, all of which were set to autoload. That means WordPress was loading all 15,000 outdated and obsolete Brute Protect transients on every request.
After deleting those 15,000 entries, Query Monitor brought me to the real culprit: 4 database queries that were running slowly. ONE of the queries was coming from my WPDK plugin. It was selecting only 20 records, the 20 most recent entries, from a table with only a few data points. However, that table has 800,000+ rows and grows by a few thousand on a daily basis.
Fixing The Problem
Even though I was asking for only 20 records, the “select the newest” part was the problem: MySQL had to scan the ENTIRE table to determine which 20 records were the newest. Adding a simple index to the table fixed the issue. With an index on the lastupdated field, the order by lastupdated DESC clause can use the index and read only 20 entries from it to fetch the records. It is MUCH faster. As in 57 seconds faster on a 60-second query.
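The effect is easy to reproduce. The sketch below uses SQLite via Python’s standard library rather than MySQL (the EXPLAIN output format differs, but it tells the same story), and the table and column names are illustrative stand-ins, not the plugin’s actual schema.

```python
import sqlite3

# A stand-in for the WPDK logging table; names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE updates (id INTEGER PRIMARY KEY, plugin TEXT, lastupdated TEXT)"
)
con.executemany(
    "INSERT INTO updates (plugin, lastupdated) VALUES (?, ?)",
    [("wpdk", f"2014-01-{d % 28 + 1:02d}") for d in range(1000)],
)

query = "SELECT * FROM updates ORDER BY lastupdated DESC LIMIT 20"

# Without an index the planner must visit every row, then sort them all,
# just to hand back the newest 20.
plan_before = [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + query)]

# The fix: index the column the ORDER BY clause uses.
con.execute("CREATE INDEX idx_lastupdated ON updates (lastupdated)")

# Now the planner walks the index from the end and stops after 20 entries.
plan_after = [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + query)]

# Exact wording varies by SQLite version, e.g. a full scan plus
# "USE TEMP B-TREE FOR ORDER BY" before, and
# "SCAN updates USING INDEX idx_lastupdated" after.
print(plan_before)
print(plan_after)
```

The same check against MySQL is a plain `EXPLAIN` on the slow query: if the plan shows a full table scan plus a filesort for an `ORDER BY ... LIMIT`, an index on the ordering column is usually the fix.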
As I’ve said before… CHECK YOUR INDEXES PEOPLE!