Sinatra is ideal since it maps URLs directly to methods. Tricked out to render Haml and Sass, templates stay simple and are styled with CSS generated by Compass. This combination is fantastic. I am just scratching the surface of these packages and am already convinced this is the way to go.
If you have 50 chairs for sale, and they are all the same chair, same cost, same style, same everything except color, do not create 50 chairs, one per color. Create one chair with 50 variants, one for each color.
This simple code smell just sent me reeling from the reek and now it is time to fix it.
Currently I am hooked up to the latest version, installed in my /workspace. It runs from the command line with ruby vision.rb. I symlink a Shopify store into Vision's /themes directory, which lets me keep the actual code in my Shopify Github account on a per-client basis. Every project I work on gets its own directory in my Github account. When adding a new client or store, I simply do a git add <new_client_store>, and all my work is versioned and available to any of my computers, laptop or desktop, home or away. Vision does not provide all the luxury of a real Shopify store, but for 80% or so of what I need it is fine. Once a site is localhost:3232 approved, I can quickly download a copy and upload it to the client's account for live tests.
Shopify's Liquid templates access images in the /assets directory using the image_url filter. This convention is awkward in that most people want to keep development images in a location like /images. In fact Compass, discussed further down, has no plans to ever support keeping images in the same directory as the stylesheets, so setting up Compass to work with Shopify takes some effort. When referencing an image in a Shopify stylesheet, eg: background: transparent url('foo.png') 0 0 no-repeat !important;, it is important to note that Compass will only ever produce background: transparent url('/foo.png') 0 0 no-repeat !important; or background: transparent url('../foo.png') 0 0 no-repeat !important;, which means trouble when using Sass mixins that reference images. There are not many of those, though, so the workaround is to use the mixin, examine the generated CSS, copy it back into the Sass, and remove the mixin. Clunky, but it rarely comes into play.
Compass can be started with a number of options, including a stand-alone project, as well as one rendering one of the well-known CSS frameworks like BlueprintCSS, 960gs, YUI, or Susy. I have had success with BlueprintCSS; 960gs has been somewhat harder to deal with, and I assume YUI and Susy work well, as there are few complaints about those options.
Once a stand-alone project is created, it can be added to Github for versioning, the same as the Shopify site. I only add .gitignore entries for the .sass-cache files, which you don't want versioned anyway. A line like .sass-cache/**/* added to .gitignore, for example, keeps any files or directories under the .sass-cache directory out of the repository.
In the config.rb file, you can set up Compass to work with relative paths or not, among other configurations. For the most part I leave this at the defaults and make sure any Sass files compile without errors using the command line compass -w. Keeping a terminal tab open with that alerts me to any errors in my Sass. Since Compass compiles Sass into CSS in the /stylesheets directory by default, it is important to link the output into the Shopify project. In order to take advantage of Github, we cannot simply symlink the CSS files into the Shopify /assets directory. What I do is tinker with config.rb to tell Compass to compile the Sass directly into the Shopify /assets directory. This works well, and means all the generated CSS and Sass code is properly versioned and available to all my computers.
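As a sketch, a config.rb along these lines points Compass at the theme; the directory names here are assumptions matching my symlinked layout, not requirements:

```ruby
# config.rb -- Compass project configuration (a sketch; the shopify-theme
# paths are assumptions matching my symlinked layout, not requirements).
http_path = "/"
sass_dir = "src"                          # where the .sass sources live
css_dir = "../shopify-theme/assets"       # compile straight into Shopify's /assets
images_dir = "../shopify-theme/assets"    # Shopify keeps images there too
output_style = :expanded                  # switch to :compressed for production
```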
Sass is a way of writing CSS at a higher level of abstraction, practically like writing Ruby. Sass has excellent constructs like variables and the ability to declare a named block of code for reuse as a mixin. For example, I can create a variable called !bg_color = #22ff44 and then refer to it in any further Sass files that descend from where it was declared. Sass file organization is up to the developer; I choose a pattern similar to Ruby views, whereby I load base classes as partials and then pull together all my Sass source in just one or two main files. That keeps the clutter at bay and gives me the freedom to count on mixins and variables wherever and whenever I need them.
With respect to Shopify, for example, I keep a partial file called _cart.sass handy for all the declarations specific to rendering a shopping cart in the style required by the client. Any grid setup is stored in a partial called _grid.sass, and likewise basic text formatting is in a partial called _text.sass. A main Sass file would then just @import partials/text.sass to take advantage of the text setup for the site. A header example might be:
@import compass/utilities/text.sass
@import compass/utilities/links.sass
@import compass/utilities/lists.sass
@import compass/layout.sass
@import partials/base.sass
@import partials/cart.sass
@import text.sass
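For illustration, a partial such as _text.sass (written in the old indented Sass syntax; all names and values here are invented examples, not a client's actual setup) might read:

```sass
// _text.sass -- hypothetical partial; names and colours are examples only
!text_color = #333333
!accent_color = #22ff44

// a named block of declarations, reusable as a mixin
=emphasized
  color = !accent_color
  font-weight: bold

body
  color = !text_color
  font-size: 13px

h2.callout
  +emphasized
```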
I used to work by tweaking Liquid templates, saving them, and then previewing the results. This was a terrible workflow that seriously diminished my enthusiasm for working on Shopify sites. Now, with Compass and the workflow advantages it provides, developing Liquid and Javascript and integrating CSS is excellent and fun.
The latest Vim, Scribe and Textmate editors all allow code to be saved as soon as the editing window loses focus. Compass has a watch command that compiles code as soon as changes are detected in a Sass file. Hence, a split second after moving focus from the editor to a browser, the code has been saved, Compass notices, and it compiles the Sass to CSS, ready for review in the browser. Even further, Firefox (which I don't use much anymore) has a watcher plugin called XRefresh which auto-refreshes the browser when it notices changes to specified files. This means that as focus switches from editor to browser, you always see the latest, greatest code without doing much more than a single mouse click. The author of Sass, Nathan Weizenbaum, has even provided the community with the FireSass plugin for Firefox, allowing developers to see the exact line of Sass code responsible for CSS issues in the browser. I have not gotten far enough to try that yet, but it sounds nice.
I will soon open my first and very own Shopify store, with custom courses as the product for sale. I will likely keep it simple, offering time in exchange for money. You can request a course to learn anything you want about Shopify, without fear of the confusing or incorrect answers found on the public forums.
Courses will be offered via screen sharing, Skype or perhaps even just the good old telephone system if desired. I am probably going to prepare some online slideshows as well, to assist in the course content delivery, and to keep focus.
The algorithms I found published on the forums were by and large based on the following algorithm:
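Reconstructed from the description below rather than copied from any particular author, the forum pattern boils down to a nested loop over every product and every tag, roughly:

```liquid
{% for other_product in collections.all.products %}
  {% unless other_product.handle == product.handle %}
    {% for tag in product.tags %}
      {% if other_product.tags contains tag %}
        <!-- render other_product as a related product -->
      {% endif %}
    {% endfor %}
  {% endunless %}
{% endfor %}
```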
Imagine for a second you have a store with 1000 products, and you want to display two products related to the one you are currently showing. With this algorithm, the outer loop has to run 1000 times, once for each product in the store. Inside this loop, you have to loop through not only every tag the product itself has, but also the tags of each and every other product, looking for matches. This is terribly inefficient. Although there are probably ways to accomplish related products without this algorithm, more than a few stores are using it.
I have an application working on Heroku that provides support for using tags to connect to related products. Try out heroku to get a sense for what it can do.
The application presented here works a little differently from the algorithm above. Once installed in your Shopify store, the application provides a link to "Manage Related Products". Following this link (opening in a new tab or window) brings up the application, which works with the specified product. The application presents a simple list of any established related products, if any, and accepts tags in a text box. When you submit tags, the application searches the store for all products with a matching tag (or tags), assigns those products to a metafield on the product, and displays the matches in a list.
At this point, if you were to view the product using the established product.liquid template, nothing would appear different at all, as there is no code in place to render any related products. To take care of this, we can include a Liquid snippet called "related_products" that detects whether any related products are assigned to the product and, if so, renders some new DOM nodes along with the data representing the products. With a little Javascript, we can then turn that data into nice clickable elements.
As an example of how tagging might be used to relate products to one another, I have added four records from my collection to the homepage of my store at hunkybill, using the Ramones record as the sample product. The following figures help explain the rationale behind the related products. Each record is tagged a different way, appropriate simply as example tags, and not necessarily for music snobs to debate. When showing the Ramones record, if I chose the tag 1970's, I would want a related record to show up, which happens to be Peter Tosh (figure 1). If I changed my mind and decided the Ramones were suggestive of Reggae, I would want Peter Tosh and Fishbone to show up as related products (figure 2).
A simple Liquid snippet that looks for the related products metafield is good enough to render inside the product.liquid template. The DOM markup is easily styled and can be modified for almost any purpose. If you use a DOM inspector to examine related_products_data, you can see the metafield string of product data. This container div should have a style of display:none to ensure no one actually sees this data, as it is ugly.
{% assign mf = product.metafields.related_products %}
{% unless mf == empty %}
  <div id="related_products_ct">
    <div id="related_products_data">
      {{ product.metafields.related_products.related_products }}
    </div>
    <h2>Related Products</h2>
    <ul id="related_products_list"></ul>
  </div>
{% else %}
  <p>No Related Products</p>
{% endunless %}
Once the product.liquid template and the included "related_products" snippet have rendered, the DOM can be checked to see if those elements actually exist. If they do, it is simple to read the metafield string containing the related products into a Javascript variable. The application stored the data JSON-encoded, so we simply reverse this by parsing the string as JSON. Now we have an array of objects we can iterate, with access to anything interesting about each product: the images, the variants, the prices. We can therefore render a nice looking related product with nothing but CSS and some DOM elements. For this example, I have simply used a list to render a link to each related product using its title. The simplest code to do this can be added to any shop.js file (although the dialect of this example is jQuery, it would be dead simple to use any other flavour of Javascript you like).
(function($) {
  // Read the related products metafield data and render a link list.
  $.processRelatedProducts = function() {
    var data = JSON.parse($('#related_products_data').text());
    var list = $('#related_products_list');
    $.each(data, function(idx, obj) {
      list.append($("<li>").append($("<a>").attr({href: '/products/' + obj.handle}).text(obj.title)));
    });
  };
})(jQuery);

$(document).ready(function() {
  // DOM is loaded, so we are ready to process whatever we want.
  // Note: a jQuery object is always truthy, so check .length,
  // not the object itself, to see if the element exists.
  if ($('#related_products_data').length) {
    $.processRelatedProducts();
  }
});
This application has plenty of room for improvement: showing all the available tags the way the Shopify admin does when setting tags, limiting how many products end up as related products, searching only designated collections for candidate products; all easy modifications. The code for the application is open-sourced on Github for anyone who wants to hack away and make it better. Since some people do like to use tags to relate products, even if Shopify does not believe this is an appropriate use of tags, I hope my application can make at least some people happier in managing their related products.
Sinatra is my favourite Ruby framework for many reasons. Its Domain Specific Language (DSL) style makes route handling easy, which is perfect for Shopify Apps. It handles models and ActiveRecord well, supports my favourite template system, Haml, and comes in a small but very extensible package of only 2500 lines of code. Compare that to the popular Rails, at 250,000 lines of code.
I run Sinatra using the Thin webserver on Heroku, one of the best cloud-based hosting companies I have ever worked with. Heroku accepts an entire App codebase as a simple git remote, so I can simply git push an entire App to see the magic in action.
A common request from Shopify store owners is to have some sort of external access whereby a third party, a Vendor perhaps, can login to a web application, participate in some aspect of Shopify and yet not have access to the Shopify site itself. I addressed this with an App recently.
Using Sinatra and Rack, I set up three applications in the one App, which for this blog we can simply name VendorApp. VendorApp encapsulates three smaller specialized applications: ShopifyApp, PublicApp and AdminApp, available at three separate URLs.
http://vendorapp.heroku.com/ is the way the public (external vendors) login and see their reports, upload their desired products, etc.
http://vendorapp.heroku.com/admin is where the Shopkeeper logs in and determines what his vendors can see, what they have been doing, etc.
http://vendorapp.heroku.com/shopify is where the ShopifyAPI lives and this is where sales reports can be tallied, orders filtered, products added etc.
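In Rack terms, the mounting can be sketched in a config.ru like this (PublicApp, AdminApp and ShopifyApp are the made-up class names from this post, each a Sinatra::Base subclass living in its own file):

```ruby
# config.ru -- sketch of mounting three Sinatra apps under one VendorApp.
require './public_app'
require './admin_app'
require './shopify_app'

map '/' do
  run PublicApp       # external vendors login here
end

map '/admin' do
  run AdminApp        # the shopkeeper's control panel
end

map '/shopify' do
  run ShopifyApp      # ShopifyAPI calls live here
end
```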
Using Warden for Authentication, we can ensure vendors that try the PublicApp login can see only their sales, only their products, etc. With the same code we can also authenticate for access to the AdminApp, where the shopkeeper sees all his Vendors, all their products etc. The admin can control these accounts with a few clicks.
A simple sample workflow for this Application is as follows. Every product has a vendor. When an order is paid for, we capture the order using a webhook and extract the products and vendors. If the vendor exists, we add the quantity and price of the sold product to the vendor's sales; a vendor has many sales. Now, when the vendor logs in to the PublicApp, we show them how many products they have sold and how much money they earned. The shopkeeper can set a percentage for the sales, so that a vendor may see 30% of any sale.
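As a self-contained sketch of the webhook bookkeeping (the method and field names here are my own illustration, and the `order` hash only loosely mirrors the shape of Shopify's webhook payload):

```ruby
# Sketch of the order-paid webhook math. Unknown vendors are skipped;
# each known vendor accrues their configured percentage of the gross.
def record_sales(order, vendor_sales, percentage)
  order[:line_items].each do |item|
    vendor = item[:vendor]
    next unless vendor_sales.key?(vendor)   # ignore products we don't track
    gross = item[:price] * item[:quantity]
    vendor_sales[vendor] += gross * percentage
  end
  vendor_sales
end

sales = { "Acme" => 0.0 }
order = { line_items: [
  { vendor: "Acme",  price: 10.0, quantity: 2 },
  { vendor: "Other", price: 99.0, quantity: 1 }   # not our vendor; skipped
] }
record_sales(order, sales, 0.30)   # Acme sees 30% of 20.00 = 6.00
```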
If you have any interest in this kind of App, drop me a line and perhaps I can install a version of this App into your Shopify store.
When an order is booked, the customer and shopkeeper receive an email that serves as a first communication. Quite a few other email templates can fire once the initial order is dealt with. Fulfillment, though, presents some issues that I have recently dealt with using custom Applications.
One scenario: when a shop is receiving a lot of orders, it is painful to fulfill them one by one. I created an App that accepts up to 250 orders at once from the Shopify Admin and fulfills them automatically. Although that relieves the shopkeeper from clicking on 250 separate orders, it revealed some other problems. Sometimes it is important to fulfill those orders and NOT send an email. Sometimes it is important to fulfill them and send a CUSTOM message. My recent App allows for that by letting the shopkeeper create custom email messages; when they select orders for special fulfillment, each order's customer gets the custom email. One super thing about Heroku is the easy integration with Delayed Job. Tobi from Shopify wrote this code to take on the running of background tasks, and in my Apps I now delegate all potentially long-running tasks to background jobs. It is quite neat to see this in action: I watch the job queue fill and empty itself during normal operations of the App. The Delayed Job workers spin themselves up, work fast, and kill themselves off once all the work is done.
Another scenario that cropped up was the fact that sometimes an order will need to be fulfilled more than once. That occurs when a shopkeeper accepts payment for a quantity of more than one item, while at the same time recording future delivery dates. At each delivery date, an already fulfilled order needs to be fulfilled again. Luckily one of my Apps is able to present the shopkeeper with future delivery dates so that these future fulfillments happen in an orderly and proper fashion. Along with multiple fulfillments, the shopkeeper can rely on the built-in shipping update letter to go out, or use the custom messages they store in the App.
These custom Fulfillment Apps are a great way to eliminate clicks in the day to day administration of Shopify stores that deal with fulfillments in ways just slightly different than the usual pattern.
My Fulfillment Apps are well tested at this point with many thousands of orders booked. If you have a twist on fulfillment you have to address with your Shopify store, drop me a line.
Ruby developers are generally very familiar with rakefiles and rake tasks. Unlike makefiles and the make command, which are generally reserved for people who think wave equations are a fun breakfast distraction, while the rest of us just think waves are pretty and like to jump in them, rake and rakefiles are comprehensible and indispensable.
As with anything progressive, rake was deemed ripe for a refactoring, and during the glory days of Merb, before Rails absorbed its Merb-ishness, Merb had Wycats and Thor. Thor is like rake, but even more fun, some would say. Thor tasks are yet another way of running Ruby tasks so that developer life is easier and more automatic. It turns out the Shopify developer gang built a Thor-based command-line interface (CLI) into the Shopify gem. I finally got around to playing with it this rainy weekend, many months after it was introduced to me at a beer drinkup I attended with the Shopify nerds.
When you install the Shopify API gem
gem install shopify_api
you get this command line interface for free! In order to make it a little more compatible with my current environment, I tweaked it a little as per the gist here: cli.rb
If you copy that code into your favourite text editor and save it as cli.rb, you can experiment with this neat option. Once you save this file to your system (*nix compatible), you can mark it as executable with
chmod +x cli.rb
If you then type cli.rb list you will see nothing!!! Yay... unless you see errors, which would mean you messed something up. If that is the case, and some error chunks are blown, edit the file till none are thrown.
Once going, try this:
cli.rb add mysite
You will be prompted that mysite.myshopify.com will be created for you, or you could type in something else.
Now you fill in the API key and API password for the site, and all of a sudden this is a good thing: a configuration file has been made especially for this site. Big deal, you say. Bah... you do that crap all the time... Okay.
So next, try:
cli.rb console
And now, you're in a full-blown IRB session, authenticated to that store. Wonderful. Now you can ask questions like:
ShopifyAPI::Product.count
and get an instant answer. No messing around. I used to just use IRB and paste in my shop's auth key, but that is too painful; I can never remember all 280 digits off the top of my head. Even connecting to Heroku and asking my shop for the credentials is a pain, and looking inside my notes application also sucks. This is just a simpler, cleaner way to make a config file and hang out.
There are enough other commands to keep it all nice and organized. In fact, the current list is:
Tasks:
cli.rb add CONNECTION # create a config file for a connection named CONNECTION
cli.rb console [CONNECTION] # start an API console for CONNECTION
cli.rb default [CONNECTION] # show the default connection, or make CONNECTION the default
cli.rb edit [CONNECTION] # open the config file for CONNECTION with your default editor
cli.rb help [TASK] # Describe available tasks or one specific task
cli.rb list # list available connections
cli.rb remove CONNECTION # remove the config file for CONNECTION
cli.rb show [CONNECTION] # output the location and contents of the CONNECTION's config file
Neat. Enjoy. I know I will.
So, I receive access to a store as the store admin, deploy my code, and test out the Recurring Charge calls. It all works as expected in my experiments, so I call up the shopkeeper and let her push the actual button confirming that yes, she is willing to pay the small monthly fee for using my small App to relieve her of tedious work.
I get another call to deploy the same kind of App. So I set out to see how that all works by accepting a staff account to the new store, and wiring it all up. After some initial glitches with my sloppy code migration, I hook into the App and it is approved, but I always end up at the "confirmed" page of my App sans a valid charge ID. WHA HAPPEN? WHUZZUP?
I load up my App with debugging and see what ends up happening: my App finds itself authorized in the shop, but the redirect ends up at the Shop Admin Account page. It gradually dawns on me that the problem is simple. I am NOT the shop admin; I am a lowly staff member. So all my tests of the "Do you accept to pay $XX for this App" screen were falling on deaf ears.
I asked my client to login and click on the App login... she is directly confronted with the correct screen from Shopify, asking her if she accepts the charge for the App. She agrees and ends up at my App's confirmed script with the charge ID. All is well. Lesson learned.
Scenario: you are running a very busy Shopify business, closing hundreds of orders per day. Your product is great, Shopify works great, and your customers are by and large happy.
Occasionally, no matter how well things are going, a customer is going to phone up and complain about something. The product was not quite what they expected. The delivery was late. The book was bent. The fruit was soft. The neon green was more like neon taupe. And on and on.
So you pull up their order and you're looking at your Shopify Admin screen.
You installed the We are Sorry! App and now you have that link at your disposal. You click it and the magic happens. The App looks up the customer, generates a 10% one time use discount for them, and fires off an email to them with the good news. The results speak for themselves.
What an easy way to make your customers happy again. Say you're sorry today!
You can also completely customize the actual discount. It can be cash amount, a percentage, anything you can do with the built-in Marketing tab discount codes.
When Shopify upgrades the Discount Coupon operations, the App will follow suit, so you can take advantage of one-click apologies.
You can see the App in action by trying it for yourself. It currently lives at sorry
We used to have to write wrapper code to monitor and count API calls on top of the complex program logic, which was clunky to say the least. The ShopifyAPI development team kicked in by providing API call limits in the response headers, allowing an App to smartly self-monitor these limits during processing.
This was great, except the ShopifyAPI gem was not exposing the ActiveResource response headers, making this small bit of progress tough to use for most people. Along came @christocracy, who quickly hacked a small modification to ActiveResource and the ShopifyAPI gem, providing simple, non-intrusive methods for monitoring API calls. Besides knowing where the App stands with these limits, it is also important to run longer sequences of API calls in background jobs (AKA delayed jobs) so as not to block the web App from servicing other calls. How do we take advantage of this wicked combination of DelayedJob and the ShopifyAPI credit limits?
Assume a user selects ALL their (Orders|Products) in their shop, and this amounts to a total of 603 items (a list of ID numbers), just over twice a shop's typical API limit for continuous calls. To send these 603 (Orders|Products) to the App, I set up the shop(s) with an injected link in the (Orders|Products) Action drop-down menu. At the moment the Shopify Admin has a bug limiting selection to 50 items, but let's assume we can send all 603 IDs as a GET parameter to our App.
Our App receives 603 IDs and needs to "touch" each (Order|Product) once or twice (eg: read then update), for example with this simple operation:
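In essence the per-item operation is a find followed by a save. Here is a self-contained sketch, where FakeShop stands in for ShopifyAPI::Product purely so the call counting is visible (the real code would call ShopifyAPI directly):

```ruby
# Each item costs one API call to read and one to update: two per item.
# FakeShop is a stand-in for ShopifyAPI::Product that just counts calls.
class FakeShop
  attr_reader :api_calls

  def initialize
    @api_calls = 0
  end

  def find(id)               # read: burns one API call
    @api_calls += 1
    { id: id, tags: "" }
  end

  def save(product)          # update: burns one API call
    @api_calls += 1
    true
  end
end

shop = FakeShop.new
product = shop.find(42)
product[:tags] = "processed"
shop.save(product)
shop.api_calls   # => 2
```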
That burns 2 API calls right there. So we are in some trouble, since we can only make 300 API calls continuously.
I decided to send ALL 603 IDs to a method that enqueues the request to process this list into the Delayed Jobs table I use. Delayed Jobs are run automatically by worker threads that spin themselves in and out of existence, so as not to burn through the App's credit card charges for worker fees too swiftly. I use Heroku, where workers cost a nickel each per hour. The key to a Delayed Job is that you can control when it runs and what runs, and it does not block your App, since it runs in its own thread.
When a job runs, the instantiated object can check the number of API calls still available. If there are available API calls, we can interact with the shop. If there are not enough API calls to complete a desired process, we have to take steps to ensure we complete all the tasks that were specified.
We are processing a list of (Orders|Products), so we need to keep track of the ones we have already processed and the ones we have not. This is easy: keep track of the index of where we are in the iteration of the list. We can then spawn a new Delayed Job and, since we know we have only partially processed our list, instantiate the new job with the items we have not processed yet.
Additionally, when setting up a new job we can specify when to run it. If we tell it to run 601 seconds in the future, we should have a fresh slate of API calls, since the limit will have reset. That is the essence of the code provided in the following gist. Of course, other activity could also delay the availability of API calls, so the process should continue re-spawning itself until it has enough calls to complete. This means jobs that start and encounter a 503 response (no API calls can be made temporarily) should spawn a new job in the future and terminate themselves properly.
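Stripped of the Delayed Job plumbing, the splitting logic amounts to something like this (the method name is mine; the real gist leaned on the shopify_api_limits helpers to learn how many credits remain):

```ruby
CALLS_PER_ITEM = 2   # each item needs a read and an update

# Returns [ids we can afford to process now, ids for the re-spawned job].
# In real code, `credits_left` would come from the shopify_api_limits gem.
def split_batch(ids, credits_left)
  affordable = credits_left / CALLS_PER_ITEM
  [ids.take(affordable), ids.drop(affordable)]
end

ids = (1..603).to_a
now, later = split_batch(ids, 600)   # 600 credits covers 300 items
# `now` is processed immediately; `later` (the remaining 303 IDs) is
# handed to a new job enqueued ~10 minutes out, when credits have reset.
now.size    # => 300
later.size  # => 303
```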
I tested this code and it works well. It processes as many API calls as possible before hitting the limit, at which point it spawns a new delayed job to start 10 minutes later. Watching the logs for activity, 10 minutes passed and the next delayed job was picked up by the worker thread assigned to run jobs. This job completed before running out of API calls and terminated gracefully, leaving the App in a good state, the shop in a good state, and all was well that ended well.
Thanks to Chris for his excellent shopify_api_limits gem for making this possible.
My plan for this summer is to finish off polishing some needed upgrades to some popular Shopify Apps I have created, and to continue servicing my great clients with code they need to expand their sales and e-commerce efforts.
My todo list is well populated, and with the birth of this version of a blog, hopefully I can share some of the lessons learned about bringing excellent supporting Apps for the Shopify platform to market.
For this summer, with the kids all raring to go and needing entertainment, it was obvious that moving the family unit to cottage country would be ideal. I remembered a place recently constructed near Labelle, QC, with deluxe chalets complete with wifi, and that is now a brief homebase from which to both relax and work. The wifi keeps me in touch with the cloud and clients, and the nearby lake keeps me hopping between bathing suits and cutoffs.
A fridge full of beer, a BBQ ready to go at a moment's notice, and some nice deck chairs overlooking the forest complete a nice working situation. I could do this full time, but I think the novelty would wear off quickly for the kids. That said, we have barely explored the local area, and intentions remain to explore more on horseback and by bike.
So far, managing the business of cloud computing and client expectations while living closer to the wilds is working out well.
It is a pleasurable process, and at times I have deleted 2000+ lines of goofy customization code that never really worked well, replacing it with 50 lines that do. The problem with the old solution lies in how Shopify sets up cookies and sessions, which completely breaks the old patterns of using cookies and cart.attributes.
In the move to using Line Item Properties, a client just informed me of a problem. If you render a nice input text box, assign it a name like "properties[FizzBuzz]", and the customer types in some input like "Shiny ball, arf arf arf", it gets assigned to the product as a line item property and all is well. Now let's try another input from another customer that reads "Mike & Mary up in a tree, k-i-s-s-i-n-g.". When we review this, we will be somewhat shocked to see it processed as just "Mike". The reason is that when we post the values to the Shopify cart, the ampersand becomes part of the Ajax parameters, and that is not going to fly. If you POST data to the Shopify API to add a product to the cart, the data can be something like
"?quantity=1&variant_id=123456789&FizzBuzz='Coke Please, hold the straw'"
This will work fine. But in the second case we pointed out with Mike and Mary, it will bomb. It would look like:
"?quantity=1&variant_id=123456789&FizzBuzz='Mike & Mary up in a tree, k-i-s-s-i-n-g.'"
And now we're in some trouble. The ampersand in the value messes up the data sent to Shopify, and the merchant receives the "Mike" but nothing else. Solving this is simple: use Javascript and a regular expression to turn every ampersand in the line item property values into its encoded representation, %26. Now we submit:
"?quantity=1&variant_id=123456789&FizzBuzz='Mike %26 Mary up in a tree, k-i-s-s-i-n-g.'"
That satisfies the HTTP POST and works well with the Shopify Javascript API.
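The fix boils down to a one-liner. The minimal regex version matching the description above looks like this (encodeURIComponent is the more general tool, escaping every reserved character at once, not just ampersands):

```javascript
// Replace every ampersand in a line item property value with %26 so the
// value survives the trip through the cart POST. For full safety across
// all reserved characters, use encodeURIComponent(value) instead.
function escapeProperty(value) {
  return value.replace(/&/g, '%26');
}

escapeProperty("Mike & Mary up in a tree");  // => "Mike %26 Mary up in a tree"
```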
I don't want to rely on a 3rd party for support or trust that they'll be around by year's end, or know that it won't disrupt or break a part of my site.
And, sure, I don't want to pay extra for something that I perceive to be a basic feature.
The reason I am singling this quote out is that it exposes a barrier we developers have to overcome. If we cannot get merchants to trust that an App they buy or subscribe to will still be around in a year, and that Apps won't break their shops, our small businesses will fail for sure. The percentage of merchants who think about these issues is likely not small. How do we ensure merchants can trust us? Do we need to provide status logs of our uptime? Do we need to count our transactions and display them in a dashboard?
The second point is even more troubling and has roots in the very existence of hosted platforms like Shopify. Shopify's stance as a platform is that the 80% of most common e-commerce and merchant needs should be met with excellence; the remaining 20% is considered not important enough to merit resources until some time in the future. Maybe the issues from this 20% pool will be resolved, but for now they are simply regarded as suggestions. As developers, we live in this 20% pool of things Shopify does not do. It started with product customization and tweaks to try and make shops bilingual. With the introduction of the API, hosted services could be built that merchants could buy or subscribe to and that securely just plugged in. But as this quote shows, many merchants believe the services an App provides should just be baked right into Shopify. The language used is pretty consistent and usually goes along the lines of "I can't believe this is not part of the basic service".
As a developer there is also the issue of living with Apps over long periods of time and nurturing them along. I created an App that addressed a simple missing need in Shopify. For many months I enjoyed offering an App that solved some merchants' problems by allowing them to SEO their shop with more specific information, with the bonus of import and export for bulk editing. Then came the least glamorous side of business, when the copycats stepped in and copied my App to attract their own clients. Finally Shopify closed the loop and released upgrades that made these Apps superfluous for their original tasks. This kind of ongoing change is complex and can be hard to manage when trying to plan for a future and to innovate. On the one hand you have merchants in a mindset that Shopify should really be everything, even though that is clearly a terribly difficult thing to achieve. The 80/20 rule exists for a reason. On the other hand you know you could wake up and find the platform has just changed with the release of a new feature that puts your App out to pasture. A risk you accept, but nonetheless it is disheartening when it happens.
I used to tell merchants who cried for features unlikely to make it into Shopify that they should lower their expectations of what they perceive to be necessary and instead try to build their business by maximizing what they can actually do first. If they do actually max out the capabilities of Shopify, they are probably capable of paying to develop their own platform. It no longer interests me to fight that battle, as experience has taught me that I lose that argument.
I am hoping that a few things eventually happen to make working with this platform easier for independent developers or third parties to Shopify. I hope that Shopify takes a more active role in identifying the Apps that are of true utility and use, and that they not only promote them, but also provide those developers with better support. If you have an App with five or ten subscribers you're in a different class than an App with over a thousand or ten thousand. It would also be very nice if Shopify would provide developers with some metrics revealing more about the 20% pool of needs that are perceived as necessary by merchants, but are not met by Shopify. Additionally it would be great to know if certain features remain on or off the roadmap. Secrecy about the roadmap has always irked merchants too, I am sure. As a developer I am scared to develop any App that could be superseded by Shopify. The rumour mill abounds with whispered sweet nothings about new features about to be released. Charging a client thousands to solve a business need that is then replaced by core Shopify is not something I would like to do.
I still think merchants need some education that there is a cost to doing e-commerce with Shopify that goes beyond their basic subscription. They do need to subscribe to certain Apps in certain cases, and that Apps cost money since computing is rarely a free service. A piper somewhere is very sad if he is not paid for his playing. Almost all cars have steering wheels and therefore a Chevy Nova and a Ferrari are kindred spirits. But a Ferrari goes 300 Km/h and costs $300,000 whereas the Nova cost $1200 and goes 100 Km/h. Saying you think the Nova should be a Ferrari is not going to make it so. And no, I am not saying Shopify is akin to a Nova, I just like old Chevys.
]]>Merchants are often presented with inventory feeds from suppliers that they can use to present more inventory in their shop than they keep on hand or in their direct possession for sales. Most of the time these feeds are available in XML or CSV and are suitable for examination in a spreadsheet or text editor. Sadly most merchants remain unable to use these feeds as they do not magically import into Shopify or any other e-commerce platform for that matter. This data remains in the wild unless some computing can be applied to it to make it possible to add it to a shop.
Suppliers can often provide data feeds to merchants via FTP, or more rarely as an online feed that can be accessed using HTTP. To bridge the gap I provide merchants with access to a custom Dropbox where they can upload their feeds in whatever format the supplier provides. That way the merchant can function at an even lower level, via email.
With a data feed at my disposal I can download it using a script so that it can be parsed, to tease out the nuggets or jewels contained within. If the file is CSV text it can be parsed using Ruby's CSV library. If the feed is XML, Nokogiri is excellent at beating the XML into submission. It remains rare to have access to JSON, which is unfortunate, but signifies how most suppliers still depend on outdated enterprise platforms incapable of pumping out JSON.
Often the supplier's data is dirty and needs cleaning. A quick example: they may provide 10,000 inventory quantity numbers for variants along with a SKU that can be used to find each variant, but the reality is the shop only contains a small subset of those SKU codes. Instead of using the Shopify API to search for these SKU codes, I first produce an intermediate file that sets up all the work to be done against the actual shop inventory. Once this file is prepared it can reduce tens of thousands of API calls down to the minimum needed, ensuring any inventory update is as easy as possible to manage.
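As a sketch of that preparation step (the feed content, the SKU-to-variant lookup and the field names are all invented for illustration), filtering the feed down to the shop's own SKUs before any API work looks like this:

```ruby
require 'csv'
require 'json'

# Hypothetical supplier feed: thousands of rows of SKU + quantity.
feed = "sku,quantity\nABC-1,12\nABC-2,7\nZZZ-9,3\n"

# SKUs the shop actually carries, mapped to variant IDs; in practice
# this lookup is built once from the shop's own data.
shop_skus = { 'ABC-1' => 111, 'ABC-2' => 222 }

# Build the intermediate work list: only rows matching shop inventory
# survive, so later API usage is the minimum needed.
work = CSV.parse(feed, headers: true).each_with_object([]) do |row, memo|
  variant_id = shop_skus[row['sku']]
  memo << { 'variant_id' => variant_id, 'quantity' => row['quantity'].to_i } if variant_id
end

# Serialize as the intermediate file handed to the update jobs.
work_json = JSON.generate(work)
puts work.length  # the out-of-shop SKU ZZZ-9 was dropped, leaving 2
```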
After the data preparation phase I have some tough decisions to make about how to use the Shopify API. Since a script only gets 500 API calls per 300 seconds, any script updating inventory has to be able to gracefully handle this limitation. The typical approach of detecting a 429 (out of API calls) response and then going to sleep is the worst in my opinion, as it ties down a worker thread for no reason and hence does not scale nicely.
I commit my work to a cache that my background jobs can access. The first time a background job to update inventory commences, it reads the cache looking for work to do. If work is found, the script hits the API and chews through the work until the limits are reached. At that point all the completed work has been removed from the cache, leaving it smaller. The background job schedules itself to restart in 301 seconds and terminates. Once this process has chewed through the whole cache, leaving it empty, it emails the merchant with a comprehensive report on the inventory updated and terminates until the next inventory update is scheduled. For convenience I provide the merchant with a manual button they can click to initiate an update, or I set a job to run at a scheduled time, daily or hourly.
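Stripped of the Delayed Job plumbing, the control flow of such a self-rescheduling job can be sketched in plain Ruby (the class and method names are assumptions, and the cache hash stands in for whatever shared store the jobs use):

```ruby
# Plain-Ruby sketch of the self-rescheduling job's control flow; in
# the real App this would be a Delayed Job backed by a shared cache.
class InventoryUpdateJob
  API_BUDGET = 500  # roughly the calls allowed per 300-second window

  attr_reader :report

  def initialize(cache)
    @cache  = cache  # stand-in shape: { 'work' => [ ...items... ] }
    @report = []
  end

  def perform
    calls = 0
    while calls < API_BUDGET && (item = @cache['work'].shift)
      update_variant(item)  # one API call; completed work leaves the cache
      calls += 1
    end
    if @cache['work'].empty?
      email_report     # all done: comprehensive report to the merchant
    else
      reschedule(301)  # budget spent: come back after the limit window
    end
  end

  private

  # Stand-ins for the API call, the report email and job scheduling.
  def update_variant(item); @report << item; end
  def email_report; @report.freeze; end
  def reschedule(seconds); seconds; end
end
```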
Some merchants are using this pattern I set up for them to process hundreds of suppliers and tens of thousands of variants for their shops. The pattern has proven itself invaluable since it can quickly be tailored to handle many kinds of data feeds, formats and quirks. It leverages scripts that access the API using background jobs capable of scheduling themselves with respect to API limits, and by working from a data cache of prepared datasets, no complex data persistence issues creep into the algorithm.
]]>I have the vast majority of my Apps outside of the App store. On occasion, when a Shopify customer or merchant would inquire about a certain functionality, I would lay down some tracks to my company by informing them I had already developed the App they need. It seems that for some merchants, if it is not in the App store it does not exist. I understand that. Presence in the App store is somewhat of a vetting process, and offers merchants some reassurance that an App is real.
Some of the Apps I have developed fall into that category of existing outside the App store for a reason. They are what I would consider too specialized for general purposes. A merchant has an exact need, and I met that exact need at that time. Over time however, it becomes clear that maybe others might benefit from the same App, with just a little generalization applied.
Today marks one of those days where I started the process of launching a new App into the App store for general consumption. You just never know when others might see the value in it, and with a few inquiries under my belt, I know this one could be of some value to a number of merchants. The App provides a merchant with the ability to run their own auctions within their shop, but in a way slightly different from most auctions. A Dutch Auction is meant to sell a product by starting with a high price, and then continuously lowering the price until the product is sold. If a preset minimum price is not met, the item is pulled from auction and remains unsold. This is a neat and useful pattern that can be applied to many types of products.
If a product is popular and bound to sell out, the merchant can start selling it with a very high profit margin to capture sales from customers for whom price is no object and who must have the product at any cost. As time passes, the price is lowered, but the inventory of available product could be falling too, making it desirable for some customers to buy in before waiting too long.
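As a sketch, a linear version of that price curve might look like the following (the linear drop and the parameter values are assumptions; a real auction could use any decreasing schedule):

```ruby
# Dutch Auction pricing: start high, drop toward a preset minimum;
# if the reserve is reached without a sale, the item is pulled.
def dutch_price(start_price, reserve_price, duration_hours, elapsed_hours)
  return reserve_price if elapsed_hours >= duration_hours
  drop = (start_price - reserve_price) * (elapsed_hours.to_f / duration_hours)
  (start_price - drop).round(2)
end

puts dutch_price(300.0, 100.0, 48, 0)   # opening price: 300.0
puts dutch_price(300.0, 100.0, 48, 24)  # halfway through: 200.0
puts dutch_price(300.0, 100.0, 48, 48)  # floor reached: 100.0
```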
Producing this App took me a lot longer than I estimated for many reasons. When I produce an App for one merchant, there are often minimal needs at play, and I ensure those minimal needs are met. In the larger domain of the Shopify App store, there is no room for missing functionality in terms of what is presented in the interface and what is delivered. All buttons, clickable elements, response messages and code behaviors must be right. Ensuring that is a long drawn out process and often in testing I realize that things would be easier by changing around forms, or by adding some tabs. That is when a whole new view has to be created and tested and then new screen captures need to be made, new video captured and manuals re-written.
]]>Why can I not see the vendor(s) in the App when I just added them to my shop?
The answer came to me quickly, since I had worked it out the first time the question was asked. A vendor is not created in the App until a sale is registered in Shopify for a product belonging to that vendor! Once the order is received by the App, any missing or new vendors will be present in the App.
In the meantime, before the Kitty was in the App store, I had written a rake task that updates the App so that any vendors missing in the App are created, by querying Shopify for an updated list of all vendors and their products. The upside of this task is that it also serves to find vendors where their name has changed in Shopify, and it propagates this name change into the App.
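The heart of that rake task can be sketched as follows (the two hashes stand in for the Shopify vendor query result and the App's vendor table, and keying vendors by an internal ID is an assumption for illustration):

```ruby
# Synchronize the App's vendors against Shopify: create any vendors
# the App is missing, and propagate name changes for known vendors.
def sync_vendors(shopify_vendors, app_vendors)
  created = shopify_vendors.count { |id, _name| !app_vendors.key?(id) }
  app_vendors.merge!(shopify_vendors)  # adds missing vendors, overwrites renames
  created  # reported back in the completion email
end

app = { 1 => 'Acme Corporation' }  # vendor 1 was since renamed in Shopify
created = sync_vendors({ 1 => 'Acme Corp', 2 => 'Bubba Gump' }, app)
puts created      # 1 new vendor
puts app.inspect  # rename propagated, vendor 2 created
```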
I added a new button to the Kitty, located in the /shopify route under the navbar's Preferences link, where subscribers see their options to import/export the cost prices of their products. Now there is a third option: to synchronize their vendors with the App.
Once a synchronization job is completed, the subscriber should receive an email telling them the job completed, and how many new vendors were added by the operation.
]]>You are like many merchants operating in the e-commerce realm and you have your shop inventory stored with company MaxiFulfillments LLC. Shopify made it easy for your orders to make their way to that company and they happily process your orders when you hit the fulfillment button. Thing is, you don't want to do that 100, 200 or 454 times a day. And to boot, even though the fulfillment company is great at shipping and dealing with your inventory, they only provide you with tracking numbers via an Excel spreadsheet that comes in your email box at the end of each day. You are loath to manually go through that file and paste 100, 200 or 454 tracking numbers into your orders and then close them. But damn, with close to 5000 open orders and no end in sight, it is time to do something about that.
When a merchant receives a spreadsheet with rows and rows of data to incorporate into their order management, they quickly notice there is nowhere to upload this data. So I create a Dropbox App for them that they can use for this purpose. We use the same authentication as Shopify Apps use, namely OAuth, and this time we set up the provider as Dropbox. Using the App, they navigate to the Dropbox install option and it pops up the Dropbox App installer screen, much like Shopify Apps present to the merchant. When they approve the installation of this App, a secret key and token are provided to the App, and from that point on, any file the merchant places in their Dropbox App can be accessed by their Shopify App.
It is one thing to have success and then another to manage it effectively. When there are 1000, 3000 or 10000 or more open orders in a shop it is beneficial to close them to keep the interface presenting the orders clean and manageable. Given a shop with a large number of orders to process, we download the ones that are open and paid, and we call that our haystack. It's a good idea to cache them during development since downloading takes a long time. The haystack is pretty much an array of order ID's and perhaps a name.
The next thing to do is download the merchant's CSV from the Dropbox containing the needles of interest. Typically a row from the CSV contains at least an order ID and a tracking number. Row by row, a parse of this data builds up a new data structure called work, an array of order ID's and the tracking numbers to associate with them. This work is all constructed in a delayed job, since constructing it can take a long time. This job stores the work in the cache using a key labelled work and, before terminating, spawns a new job that will chew through the work.
The second job spawned comes to life by opening the cache and looking for a key called work. If there is anything to be done, the process is simple. Using the Shopify API open the order specified in the work file. Create a new fulfillment for that order with the tracking number and the setting for whether or not to alert the customer using the Shopify Shipping Update email. Once that fulfillment is created, the order can then be closed. If during this cycle there are no more API calls remaining, the job spawns a new copy of itself to run in 5 minutes and it terminates. As long as each successful entry in the work array is removed upon completion, this cycle is perfect for chewing through thousands of API calls without worrying about the limits.
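The per-order step inside that loop can be sketched like this (the api object and its two methods are stand-ins for the actual Shopify API calls, not real library methods):

```ruby
# Process one entry of the work array: create the fulfillment with
# its tracking number, then close the order.
def process_entry(entry, api, notify_customer: true)
  api.create_fulfillment(entry[:order_id],
                         tracking_number: entry[:tracking],
                         notify_customer: notify_customer)  # Shipping Update email when true
  api.close_order(entry[:order_id])  # keeps the open-order list clean
  entry[:order_id]  # returned so the caller can remove it from the work array
end
```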
A Shopify App to fulfill orders and add tracking numbers can be built using the Shopify API, Dropbox and Delayed Job. The merchant need only upload a CSV to Dropbox and initiate the updates with the press of a button.
]]>The IT company in question prompted me on how we would exchange the CSV data files and I presented my usual answer which is "any way you want". From past development over the years my list of exchange methods used includes FTP, sFTP, REST, Dropbox, EDI, SOAP (gag!) and plain jane email attachments. I presented this list and they responded that FTP would be fine as long as I could nail down an IP for my service. That is problematic as I utilize the cloud for hosting my Apps and in this case, my SaaS provider Heroku is not the easiest service to use with a nailed down IP.
I suggested we try HTTP and REST, whereby I could POST the CSV data to their web service instead of FTP. That seemed to go over well and I waited for the URL to POST the Shopify data to. To be fair, I also mentioned that as long as the endpoint could be tested via cURL or some other tool like Chrome's REST App I was happy.
I was surprised when I received an email pointing me to the HTTP service I requested. I was a little confused since it was a .jsp or Java Server Page URL but I tried it anyway. It turned out to be a static file uploader page! That was definitely not in my shortlist of expectations from the IT provider. I informed them that a file uploader was problematic as this process of fulfilling orders does not involve a human being browsing directories for files and locating one followed by pressing a submit button.
Learn something new every day though. It turns out that with a carefully crafted cURL command you can indeed use a static file uploading script to exchange data. The key is that curl will accept a file and a URL and then assign the file to the multi-part form on the static page and submit. For example:
file = '/tmp/fizzbuzz.csv'
result = system("curl --form 'file=@#{file}' http://www.thirdpartylogistics.com/secret/santa/motorhead/upload.jsp")
The result comes back after a few seconds as OK (or not) from shelling out and using cURL from my Ruby app. At Heroku I can use the /tmp filesystem for a use like this, so it makes some sense. The IT company informed me that this all worked fine.
So the takeaway from this is that cURL is perfectly adequate as an exchange mechanism when faced with using a static upload service. The IT company is happy since the uploaded files are going to a protected sandbox on their systems and not directly into a sensitive zone of their in-house processing.
Now we just work out the reverse process, where they provide me with the Shopify order ID and a tracking number, and the loop is closed for this merchant. They can spend their time and efforts on marketing and other important aspects of their business without worrying about manually processing and fulfilling their orders, nor about informing their customers about the status and tracking of orders.
]]>Jumping forward, some brainiacs came up with a better system called CORS or Cross-Origin Resource Sharing. Nice acronym. I want to get at some resources on a different origin, so this is the way to go.
Some developers have skimmed the Shopify API and, seeing a bunch of JSON and some basic POST requests, they get naked and jump into the party head-first thinking they are going to make a super cool Shopify hack that will wow their benefactors. They forget that the Shopify API is protected with authentication tokens that should never be present in client-side code like a rendered shop page. They can use the Shopify Ajax code to get a cart, add/remove items from a cart, format money, and perform many other front-end tasks, but there is not a whole lot going on there that is risky. A script kiddie can GET a resource and pick apart some internals, but not much else. For instance, some Apps pass themselves off as "price" manipulation masters, but anyone can simply view all the variants and their prices with no trouble using this front-end Shopify Ajax code.
Getting information to and from the back-end of a Shop is the goal here, and we need to do it with some reasonable security. One sure way to be secure is to float an App in the cloud that is installed in a shop. Consider an App like that to be nothing more than a Proxy. It can authenticate to the shop and using the Shopify API it can create, read, update or delete resources (CRUD). Since floating an App typically means running a web server of some sort, we have the ability to now build in CORS protection. That means we can decide many things about incoming requests that make us happy or sad.
Something that makes us happy is a crude check that the request is providing the HTTP_ORIGIN of where it is coming from. That can be spoofed of course, but we definitely do not want to answer the bell and say we're home and open for business if it is not part of the request. Second, we can be happy if the origin is in fact our Shopify site. So if we check that fizzbuzz.myshopify.com is the origin of our request, we are happy and we continue allowing the request to be processed.
Now we want to restrict the incoming request to be one of GET, POST, PUT or DELETE. CORS comes with coverage for those. So we can ensure that the request coming in is a POST if we want. We process it and send back JSON to the original XHR sender. As a quick example the request payload could be a logged in customer's email address and we want to return their tags, or we want to add a new tag to a customer. Or we want to do some other work that the App is approved to do. So now the developer can in fact write some Javascript code to affect a customer's session in a shop. The callbacks will work, and there are no cross-domain origin refusals such as many developers experience with Shopify when they get mixed up about how to handle this.
A simple demo here uses a neat little gem called sinatra-cross_origin that was written to wrap or encapsulate the mundane CORS settings into a helper method.
config.ru gets our basic Rack App up and running, authenticating to a shop with permission to read/write customers via the API:
require 'bundler/setup'
Bundler.require
require './app'

SCOPE = 'write_customers'

use OmniAuth::Builder do
  provider :shopify, ENV['SHOPIFY_API_KEY'], ENV['SHOPIFY_API_SECRET'],
    :scope => SCOPE,
    :setup => lambda { |env|
      params = Rack::Utils.parse_query(env['QUERY_STRING'])
      env['omniauth.strategy'].options[:client_options][:site] = "http://#{params['shop']}"
    }
end

run App::Shopify
module App
  class Shopify < Sinatra::Base
    register Sinatra::CrossOrigin

    post '/customer/email' do
      halt 403 unless request.env['HTTP_ORIGIN']
      origin = request.env['HTTP_ORIGIN']
      # the shop is known as fizzbuzz.myshopify.com
      if origin =~ /fizzbuzz/
        cross_origin :allow_origin => origin
        puts "Customer email might be: #{params[:email]}"
        # todo: some neat stuff with the incoming parameters and the Shopify API
        content_type :json
        {:success => true}.to_json
      else
        halt 403, "Illegal CORS call from #{origin}"
      end
    end
  end
end
]]>
There are occasions where a framework, toolkit or plugin does not help directly with solving a problem. This is interesting to me, and it reveals an all too common situation where some guy, gal or team of dudes/dudettes runs into some wall or barrier with an open source project and immediately fires up email, IRC, Twitter, Hacker News and other outlets to complain. Whiney waa-waa comes out and they rant, bitch and complain that RoR is too slow, or jQuery plugin XYZ does not address a mouseout event properly, or gem ibblefart throws some obscure core dump on their precious PC running Windows XP. This is annoying in the sense that most of these complaints are directed at people who have volunteered their time and energy to provide something of value to the community at large. I think people should be able to voice a concern in such a way that they expose the issue, provide some sane discourse on why it's necessary to bring it up and address it, and then leave it as a simple comment. A point of discussion perhaps. But not the beginning of a rant and whiney diatribe.
I am currently in a situation where I am rendering Shopify data in the form of a Tree View. A tree is a nice data structure where the trunk, branches and leaves are clearly defined for the end user. In my case, clicks on a leaf have to be dealt with in two dimensions. I need both the product ID and the variant ID for my App to be of utility. The product ID is the branch, and the variant ID is the leaf. The problem I encountered with the Tree View library I am using is that a click on a leaf provides only the variant ID. It has no knowledge of the branch holding it! At first I found this a bit disturbing. How can the company providing this free Tree plugin not know that someone has to know, in one click, the branch that a leaf belongs to? Getting down and dirty with the tree code itself, it turned out to be pretty simple to figure out what is going on. The developers of this library chose not to provide specific functionality; instead they provide for any extras through a simple abstraction mechanism. If you need to keep track of the branch your leaf is on, you can easily render your leaf with the branch identity attached to it. It turns out that it is really just up to me to fine-tune how I use this library for my specific uses. Another example I found was that the library fires off a selected event when clicking on a leaf but does not fire a corresponding unselect event, and that also troubled me. I need to know when a user deselects something. That was also easy to hack. I found where the tree code determines that a node is deselected, and at that point I simply trigger the unselect event I need along with the node as a payload. Voila. Bob is my Uncle. I now have a tree that works perfectly for me and my App. It is Bootstrap3 so it looks good, and I know that I am playing with a full deck here, not polluting the universe with a whiney rant that the tree view library is no good because it does not do X or Y.
There is a lot of satisfaction in this process that keeps an old developer like me going. When faced with the pace of change out there, and the need for speed in terms of delivery, it is common to beg, steal and borrow code from others to make delivery possible. Taking the time to learn exactly what you're borrowing is crucial at times. You do not need to learn all 250,000+ lines of RoR code to be effective with it, but if you stray into territory where it is not able to help you, you should be able to dive in and fix it up. That is the whole point.
]]>I chose to do a talk built around a Ruby Toolbox: what are you carrying around in your toolbox to help with the day-to-day work of writing scripts that quickly address and solve common issues? I chose to talk about delayed jobs that can be run in the background of some other work. This is ideal for API work where latency and speedy responses might be a concern. A lot of issues require scheduling and recur often, so I included some code that provides an easy way to resolve them. I felt like I was managing the time allotted without stumbling too much, so I included a few brief examples of how I use the gems in my toolbox.
The wrap up and questions after were not difficult to field and answer (thankfully), so the takeaway was that I want to do this kind of presentation again. I also learned that as a speaker I can never rely on the hardware at the venue. In this case, Notman House was providing a VGA-compatible projector but my MacBook has only a Thunderbolt display output. The other speaker was tied to Windows with a Toshiba, so he had no trouble with VGA output, but I was stuck without the dongle. In a pinch I tried deploying my slideshow to Heroku so it would be available online and hence I could just borrow his machine as a surrogate and click the arrow keys. I came close, but no cigar. My slideshow was using revealJS with grunt and nodeJS as requirements. My first ever push of a nodeJS project to Heroku, during the other presentation, was failing. I knew I was under some pressure, but this was ridiculous! Some vague JS error was holding me back from glory! It turned out to just be a simple matter of deploying the right version of Express. A kind enthusiast at the meetup volunteered to run home and pick up his dongle for me, so all was well. With that I was able to present without issue.
The slideshow is on Heroku.
]]>One of my clients decided to take my dare and hook up. Nothing good came of that. I checked my App logs and found lots of nice horror. R14 out of memory, bad buffers, failed floobles... ya baby... A quick code inspection showed me I was downloading 50,000 orders to make somewhere between 50,000 and 150,000+ new DB rows. Thing is, I stupidly wrote my loop getting those 50,000 API orders as one that stored them all, in their entirety, as objects in memory. Nice!! Except my little cloud process has only 500 MB allocated to it. 50,000 full orders will almost certainly squeeze/bust that limit quicker than you can shake a stick. All I needed was the 50,000 ID's, and an array of 50,000 integers easily fits in memory.
So the takeaway is to be careful when inviting your clients to try out something where the size of their shop's business can crush your App to a pulp in no time.
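The fix can be sketched like so: ask the API for only the id field and accumulate integers rather than full order objects (fetch_page is a stand-in for the paged API call; the query string in the comment reflects the page-based pagination of that era):

```ruby
# Keep only the order IDs, not whole order objects, to stay inside a
# small memory allocation.
def collect_order_ids(fetch_page)
  ids = []
  page = 1
  loop do
    batch = fetch_page.call(page)  # e.g. GET /orders.json?fields=id&limit=250&page=#{page}
    break if batch.empty?
    ids.concat(batch.map { |order| order['id'] })  # integers only: tiny footprint
    page += 1
  end
  ids
end

# Simulated pages of parsed API responses for illustration.
pages = [[{ 'id' => 1 }, { 'id' => 2 }], [{ 'id' => 3 }], []]
fetch = ->(page) { pages[page - 1] || [] }
puts collect_order_ids(fetch).inspect  # [1, 2, 3]
```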
]]>The App has a preferences page that allows the merchant to turn currency exchange tracking for sales on or off. When currency exchange is turned on, the merchant is prompted to select the currency they wish to make their sales in. We could just use the setting from Shopify, but for the time being the selection of the country to use is available from a country list dropdown. A shop selling goods in Brazilian Reais would select Brazil and switch the selector to ON before hitting the Save Currency Setting button.
When currency tracking is ON the Profiteer App will request the exchange rate for the selected country at midnight Eastern Standard Time and record the value for use in the following 24 hour period of time. Additionally the merchant will now be presented with a currency tracking selection per product. By default, all products are not tracking currency even if the shop has currency tracking ON. It is up to the merchant to cherry pick which products are tracking currency when recording sales. That setting is available when editing a product for cost of goods sold.
As a convenience, the latest value of the currency exchange is displayed, and the merchant has the option to use that exchange value in sales by ticking ON currency tracking.
When the merchant turns ON currency tracking for the Shop itself, the Export COGS option will include an extra column in the CSV file with the current exchange rate for any products that have currency exchange ON. It is currently Column 10 in the CSV. If there is a value in that column of any kind, that implies the merchant would like currency tracking enabled for that product. A blank or empty value in that column implies no currency tracking is ON for the product. When importing the file, not only are COGS values updated to match the CSV, but the App will enable/disable currency tracking as well.
When a sale is recorded of a product, Profiteer App will first figure out if the shop has currency exchange rate tracking on or off. If the setting is ON the next question asked is whether or not the product has currency exchange tracking on or off. If currency exchange tracking is ON the App will take the cost price from Profiteer App, multiply by the current exchange rate, and then record that resulting value as the cost of goods sold for the product at that time.
The result is a profit or margin calculation that is much closer to reality! A merchant that buys some or all of their stock from the United States can set their cost prices in the currency of their shop, and know that Profiteer will convert that value to the equivalent in US dollars at the time of the sale.
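The decision chain for a recorded sale can be sketched in a few lines (the method name and the example exchange rate are invented for illustration):

```ruby
# Cost of goods recorded at sale time: the exchange rate applies only
# when both the shop and the product have currency tracking ON.
def recorded_cogs(cost_price, exchange_rate, shop_tracking:, product_tracking:)
  return cost_price unless shop_tracking && product_tracking
  (cost_price * exchange_rate).round(2)
end

puts recorded_cogs(10.00, 1.37, shop_tracking: true, product_tracking: true)   # converted: 13.7
puts recorded_cogs(10.00, 1.37, shop_tracking: true, product_tracking: false)  # used as-is: 10.0
```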
]]>Profiteer provides the option to use average cost price (AVCO) calculations for merchants that need this kind of accounting support. The formula and its pattern of use will be explained first, followed by an explanation of how inventory levels may be affected.
To illustrate how this change works we will take an example product that has a current Shopify inventory quantity of 10, with a current cost price of 10.00. Assume no items are in any customer carts for now. The merchant purchases 20 items to add to the inventory quantity of this product, with a cost price slightly higher at 12.00. The cost price value to use for all future sales is found from a formula taking the total cost of inventory over the total number of units. In this case the new cost price will be found as:
new cost price = ((10 * 10) + (20 * 12)) / (10 + 20)
               = 340 / 30
               = 11.33
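The same weighted average is a one-liner in Ruby. This is just the formula above expressed as code, not Profiteer's actual implementation:

```ruby
# AVCO: total cost of inventory divided by total units, rounded to cents.
# qty/cost are the existing inventory, new_qty/new_cost the purchase.
def avco(qty, cost, new_qty, new_cost)
  ((qty * cost + new_qty * new_cost).to_f / (qty + new_qty)).round(2)
end

avco(10, 10.00, 20, 12.00)  # => 11.33
```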
When a merchant exports COGS using the dashboard button Export COGS, a CSV is compiled of every variant in their shop, including the current inventory quantity and cost price. The merchant can change the cost prices and then import the CSV file. Profiteer App will transfer the cost price values from the CSV to each variant's cost price used for sales. No inventory levels are changed, and the onus is on the merchant to ensure they used reasonable values for their cost prices. Manual cost price averaging is probably the way to look at this.
In Profiteer App Preferences (reachable from the App banner or the Action button on the dashboard), there is a switch to turn AVCO calculations on or off. When switched on, the Export CSV will be compiled with two new columns, labelled AVCO Inventory Quantity and AVCO Cost Price. The values are always zero in the Export, and any rows with zero values are ignored by the Import, making these columns benign unless changed.
If AVCO is on, Profiteer App will try to process each variant with the new values provided for inventory quantity and cost price. The existing inventory level of the variant in Shopify has to be non-negative, and Shopify has to be selected for the variant's inventory management. If those two criteria are not met, then only the value in the cost price column will be applied to the variant. In other words, conventional updating will take place where only the cost price is changed, matching the cost price value in the CSV (not the AVCO Cost Price value).
For variants with a positive Shopify inventory quantity, a new inventory value and a new cost price value, the AVCO formula will be applied with all four of the inventory and cost values to produce the new weighted average cost price. This value will then be used as the variant's new cost price.
The new AVCO inventory amount will be automatically added to the existing Shopify inventory through the use of the built-in Delta inventory change. That means Shopify will respect the amount of an inventory item sitting both on the shelf and in any live customer shopping carts. The merchant can complete two crucial tasks at once with the AVCO process: adding the new stock and recalculating the average cost price.
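The import rules above can be sketched in a few lines of Ruby. This is an illustration of the decision logic, with an invented `Variant` struct standing in for the real data; it is not Profiteer's code:

```ruby
# Hypothetical variant record: on-hand quantity, current cost price, and
# whether Shopify manages this variant's inventory.
Variant = Struct.new(:inventory_qty, :cost_price, :shopify_managed, keyword_init: true)

# AVCO applies only when Shopify manages the inventory, the on-hand
# quantity is non-negative, and the CSV row supplies a new quantity.
# Otherwise only the plain cost price column is used (conventional update).
def apply_import_row(variant, csv_cost, avco_qty, avco_cost)
  if variant.shopify_managed && variant.inventory_qty >= 0 && avco_qty > 0
    total = variant.inventory_qty * variant.cost_price + avco_qty * avco_cost
    variant.cost_price = (total / (variant.inventory_qty + avco_qty)).round(2)
    variant.inventory_qty += avco_qty  # the Delta inventory adjustment
  else
    variant.cost_price = csv_cost      # cost-only update, inventory untouched
  end
  variant
end
```

Running the worked example through it (10 on hand at 10.00, importing 20 at 12.00) lands the variant at 30 units with a cost price of 11.33.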
All legitimate questions, but I also think they all come from a well-meaning but uneducated viewpoint with respect to transportation logistics. I had a brief, tumultuous but excellent education in that business before dotcom implosion numero uno, and that industry kept me afloat for at least a few years after that first implosion. Transportation Logistics, or just Logistics if you will, is a mature industry, with software and systems well ahead of anything from Shopify or other ecommerce platforms in terms of reach, stability and sheer money being hustled through them. FedEx or UPS anyone?
Any merchant can do what they wish with shipping, a super nice touch from Shopify that a lot of merchants do not realize. When a merchant complains and throws out a boomerang about Shopify not doing something right or well enough, the boomerang that comes back should knock in the sensible answer: plug in whatever your heart desires, mister.
Shopify will hand off a payload to base shipping calculations on to any reachable endpoint on the Internet (a URL). The payload contains the shipment origin, which is where some truck will leave from with the goods; a destination that represents where the goods are going; and a list of the items in the manifest itself. It includes some juicy details like how many widgets of type X are to be shipped, how much each widget weighs in grams, the price, the product's Shopify ID and its variant ID.
And no, there is no information in the payload about any Discount Codes used by the customer. So good luck if you are trying to give away shipping when the customer buys more than $100 in merchandise: if you gave them a discount of $25, their cart with a $100.01 product is worth only $75.01, and you're ticked that they still get that invalid Free Shipping.
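To make the payload concrete, here is a minimal Ruby sketch of a rate responder. The payload shape (a `rate` key holding origin, destination and an `items` list with grams and quantities) follows the callback described above; the flat per-kilogram pricing and service names are entirely invented for illustration:

```ruby
# Given a parsed callback payload, return a list of rate hashes.
# Pricing here is a made-up flat rate per started kilogram, in cents;
# a real responder would query a carrier with the origin/destination too.
def rates_for(payload)
  items = payload.dig('rate', 'items') || []
  total_grams = items.sum { |i| i['grams'].to_i * i['quantity'].to_i }
  cents = (total_grams / 1000.0).ceil * 500  # 5.00 per started kg (invented)
  [{ 'service_name' => 'Standard', 'service_code' => 'STD',
     'total_price'  => cents.to_s,
     'currency'     => payload.dig('rate', 'currency') || 'USD' }]
end
```

In a Sinatra app this would sit behind a POST route that parses the request body as JSON, computes the rates, and returns them fast enough that Shopify does not give up waiting.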
Imagine you are a clever shipping rate calculation machine with a payload to work with. So you have some grams. Maybe 12,700 grams. What if you want to ship them in a roundish tube that is 24 inches long and with a circumference of 6 inches? Wait a second, are we mixing units here? Grams and inches are not in the same unit family. So now Mr. Smarty Pants machine has to ensure all units are consistent: all Imperial, like inches and ounces, or all metric, like centimeters and grams. Make a mistake by a factor of 10, or in the multiplication by 2.2 pounds per kg, and your shipping rates could be stupid.
Assuming you have all that straight, now you need to quickly decide if the weight of the package is more expensive to ship than the dimensional weight of the shipment. What is that? It turns out that shipping a small heavy thing could be as expensive as a feather in a big box. So the industry uses the more expensive value of either weight with no dimensions or weight with dimensions. So if you want to present a non-stupid rate to your customers, you better have it together and know your dimensional weights. But wait, those are not exactly easy to deal with in Shopify. Some Apps purport to make it easier, but for this exposition let's just use the case where things are in your control and you are the machine.
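The billable weight comparison is simple once the units are consistent. A sketch in Ruby, metric units throughout; note the dimensional divisor (5000 cm³/kg here) varies by carrier and service, so treat it as a placeholder and check your carrier's current value:

```ruby
# Carriers charge the greater of actual weight and dimensional weight.
# Dimensional weight = volume / divisor; 5000 cm^3 per kg is a common
# value but is carrier-specific, so it is a keyword argument here.
def billable_weight_kg(actual_kg, length_cm, width_cm, height_cm, divisor: 5000.0)
  dim_kg = (length_cm * width_cm * height_cm) / divisor
  [actual_kg, dim_kg].max
end
```

A 12.7 kg shipment in a 60 x 40 x 40 cm box bills at its 19.2 kg dimensional weight; the same 12.7 kg in a tiny box bills at its actual weight.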
Did we forget to mention that you better return a rate to the customer tout de suite, like right now? Waiting too long means Shopify will drop your answer like a stone and inform the customer you have no shipping rate to provide them. Not good. The backup Shopify rate could be nasty.
Back to the action: you're going through the items in checkout and you have a single product ID and its variant ID and the quantity and grams to be shipped. What can you do with all that? Ask FedEx, UPS or USPS for a rate and they will certainly give you some rates to return to the customer, but very likely not the best ones possible. For the best answer you have to ask for a rate based on the box that holds the weight, along with the special customer-specific shipping key they gave you. Hang on, so how do you get that box's dimensions? And what if a quantity of 3 of that item implies not one, but three of those as-yet-unknown boxes? Is this complex yet?
So you store dimensions in a data structure you can access given a product and variant ID. That way you can quickly look up the needed length, width and height of the box to ship something. And you make a rule: either as many things fit in that box as possible, or each thing gets its own box. Or you calculate the volume of each thing and the volume of the box to hold those things, and you keep stuffing things in until the box is full, then move on to another box if needed, and keep stuffing that one too. At the end of the box stuffing exercise, you ask FedEx, DHL, UPS, et al. how much they charge to ship all these boxes from the origin of the shipment to the destination, where the customer lives. And you present the rates to the customer, and let them choose the one rate that is right for them.
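The volume-fill strategy just described is a few lines of Ruby. This is the naive first approximation: it packs by volume only, ignoring whether the shapes actually fit, which real packing has to care about:

```ruby
# Naive volume-fill packing: stuff item volumes into boxes of a fixed
# volume, opening a new box whenever the current one cannot hold the
# next item. Returns the number of boxes to rate with the carrier.
def boxes_needed(item_volumes, box_volume)
  boxes = 0
  remaining = 0
  item_volumes.each do |v|
    if remaining < v          # current box is too full for this item
      boxes += 1              # open a fresh box
      remaining = box_volume
    end
    remaining -= v
  end
  boxes
end
```

Three items of 400 units of volume in 1000-unit boxes needs two boxes; that count, with the box dimensions, is what you send off for rating.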
Along the way, if you find a product that is free shipping, you ignore it in all these calculations, so that the customer receives the package with the free item at no cost. That is an easy one! If the destination zip code is in a small list of zip codes that have no delivery charge, so be it! Do not charge the customer for delivery! If they live in a zone considered to be a $25 surcharge, then surcharge away and add $25.
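Those special-case rules layer cleanly on top of the base rate. A sketch, with hypothetical zip code lists standing in for whatever a real merchant configures:

```ruby
# Hypothetical rule tables: some destinations ship free, others carry
# a flat surcharge. Amounts are in cents.
FREE_ZIPS      = %w[90210 10001].freeze
SURCHARGE_ZIPS = { '99501' => 25_00 }.freeze

# Adjust a base rate for the destination: free zips override everything,
# surcharge zips add a flat fee, everyone else pays the base rate.
def adjust_rate(base_cents, destination_zip)
  return 0 if FREE_ZIPS.include?(destination_zip)
  base_cents + SURCHARGE_ZIPS.fetch(destination_zip, 0)
end
```

Free-shipping items are handled earlier, by dropping them from the weight and box calculations before the base rate is even computed.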
Shipping things in boxes is crazy fun. That is something I know. But if you think you can just whip up a little script, plug it into Shopify and fix all your shipping problems, you are probably right, and high on something! But you might be wrong. And you might try some el cheapo App that purports to be your saviour, and you will be faced with a terrible setup screen, with a couple of thousand senseless clicks to set all your inventory up. Because you chose that el cheapo solution, they cannot customize things just for you, only for everyone, and if everyone has different needs, well then, there is a good chance a high level of configurability will be in your face. And you will hate that. It will eat at your soul and suck your time like nothing else to ensure your customers get presented with $14 and not $43. Because if you present them with $43 they will abandon you in a millisecond. And if you charge them $144 but the real charge was $288, you will be bankrupt in short order. Shipping is something you have to get right.
And it is fun. For me. But I have to think long and hard about all the time spent trying to deliver the right prices, all the time, for everything. Thanks be to Shopify for releasing a very helpful library of shipping code that gets rates from the big dogs of shipping, provided you set up the questions properly. And that is everything.
]]>I have attempted to make this change almost ten times, and each time I have met with failure. The technology stacks are so shiny, new, full of promise and yet terrible all at the same time. I spun up a Gatsby, a Hugo, a Netlify, a Ghost, a NextJS, and probably a few others. Nothing seemed very appealing. Almost all are some twist on the same old same old but executed with some cute new underlying tech and a prayer. Most of it seems to dip into the React world, and that just does nothing for me. I don't need React, never have, and likely never will. I know how to use it well enough, but I find it unnecessary for most day to day driving. Then I found out Jekyll was really dead, and I did not feel like putting in the effort to follow its successor, which is apparently Bridgetown.
Eleventy follows the pattern Jekyll established, and allowed me to quickly tack in some basic Javascript tools that made blog website computing come together very fast. I like that the templates are either markdown or nunjucks, which is pretty much Liquid but for Javascript.
A few minor problems were ironed out pretty quickly and voila, I am happy that I can publish a quick and dirty blog post with almost no effort, have it get compiled into a tiny static HTML website, and yet still have powerful deluxe extras like TailwindCSS V3 and esbuild. No need for complex webpack, or much use for node for that matter.
]]>pie
"Ruby" : 386
"Javsscript" : 85
"CSS" : 15
]]>
To simplify the merchant's workflow, we send orders via a webhook to the custom App for processing. The interesting details in the order are primarily about the customer. We set up a job for the customer, something that can later be manipulated into a fulfillment or delivery. As part of setting up the job, we include the items purchased as the actions associated with the job. In the case of a grocer, the items are typically food, and the action concerns the details of the food: how much does it weigh, what did it cost, and so on. All for organizing a delivery.
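The order-to-job mapping is a straightforward transformation. Here is a hedged Ruby sketch; the output field names are invented placeholders and are not CIGO's actual API, but the shape (customer, address, items as actions) follows the description above:

```ruby
# Map a parsed Shopify order webhook payload to a delivery-job hash.
# Field names in the result are hypothetical stand-ins for whatever
# the delivery API actually expects.
def order_to_job(order)
  {
    'customer' => order['customer']&.slice('first_name', 'last_name', 'phone'),
    'address'  => order['shipping_address'],
    'actions'  => (order['line_items'] || []).map do |li|
      # each purchased item becomes an action carrying its details
      { 'name' => li['title'], 'grams' => li['grams'], 'price' => li['price'] }
    end
  }
end
```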
Once the delivery is set up in CIGO, we can issue the tracking information back to Shopify and keep the customer informed of their delivery.
This is a fascinating aspect of Shopify not many people get into, as Shopify really has no quality last-mile local delivery equivalent to CIGO built in. Time will tell how useful this is to many merchants, but for now, I am certainly impressed with the number of options that suddenly become available to a merchant when they hook up their Shopify sales to CIGO.
]]>So now we have supercomputers in our watches, and a modestly priced Raspberry Pi is more powerful than the entire Bitcoin algorithm. I invested in an i7 iMac with a 5K screen and 64GB RAM five years ago as my development machine. Today it is still a workhorse, and if I replace the SSD with a larger, faster model, the computer should last another 5 years. But boredom with MacOS led me to try out the M1 in a Mac Mini. Unfortunately, networking the mini into my flow is painfully slow, and there is no point in trying to use an M1 mini as a remote networked machine. It needs a screen and keyboard to be truly appreciated. That got me thinking, though, about virtualization and emulation and running one computer from another. I SSH into a Raspberry Pi in my garage workshop that is running Octoprint, so I can monitor and control my 3D printer from my home office without having to chug up and down the stairs. It is almost instant on, and I have full control of the awesomeness of the Pi, but only because I am using a shell and terminal, not an entire GUI OS.
So I discovered QEMU as a thing and never bothered to figure it out. Whenever I am faced with having to work from a consistent software environment, all the typical energy there is expended on Docker. Kubernetes is for masochists with time on their hands. Docker is neat, but the constant churn I see in maintaining Docker itself is a bit of a turnoff. Not a day goes by without the whale offering me a new update. The docker containers are not changing, but Docker itself is. Constantly.
So what about QEMU? I was using Virtual Box to try out Linux on Mac, and that was straight up a terrible experience. Nothing was nice about it. From the full manual downloading of iso files, to the partitions, to the fact the display looked like garbage, I was once again relieved to see how Linux on the Desktop was still full on hacker level fun. I had Parallels for my Mac when I had to pretend to service the one or two people that would approach me with a Windows only problem, and if I am honest, it was only so I could run whatever that Microsoft browser used to be, before they capitulated to Chrome by default as their browser. IE? I don't remember.
But never did I spring for VMWare. Paying for virtualization seemed like too much work, when I had it for free with Virtual Box. I had no idea that QEMU was actually the way to go. So I installed it, and found a guy that has released a patchset to QEMU that runs OpenGL in full 5K on a MacOS host. So I grabbed PopOS as a test, and organized two small shell scripts to 1) install PopOS and then 2) run/boot into PopOS. It worked pretty much as advertised right out of the box. Without me doing much at all, I have a MacOS desktop workspace running PopOS. I can three finger swipe through my MacOS screens, and end up in PopOS as one of them. My mouse, keyboard and network all work fine in this environment, and the screen is fully 5K Retina. That is awesome. So now I have the full Linux desktop experience via PopOS running in a window the same as VSCode or iTerm, or Firefox.
The neat thing is that QEMU does not consume resources unless I instruct it to do something, in which case the entire Mac is subject to sharing. That is totally acceptable to me. It is so nice to be able to do this. I can see trying many different *nix flavours with this QEMU. While not quite as specific to programming as Docker is, this does offer the neat option of spending time configuring, customizing, and tricking out a state of the art OS while maintaining continuity with a 5 year old workstation. I can see upgrading my SSD and copying these files onto the new one, and instantly being right back where I was before, but with a few TB of faster storage at my disposal. I bet the people selling computers hate this, as they always want you to buy the latest shiny new toys, but in the current incarnation, I am quite happy to try and eke another five years out of this machine, as it is plenty for me, and I have access to all the latest stuff without struggling, thanks to amazing tools like QEMU!
The OpenGL implementation to play with: QEMU OpenGL
]]>