Technology Blog

Magento 2 Madness with IsSalable returning nothing

by on Feb.13, 2020, under Uncategorized

This morning I had to troubleshoot a third-party module (Amasty Out of Stock Notifications) that was not sending emails out properly. After wrongly assuming the module was junk (we used it on M1 and it worked great, but had doubts about the M2 version), I jumped full-on into coding / debugging mode.

This led me to this line:


if ($allProducts && $product->isSalable()) {

So of course I printed out the return values of both. $allProducts returned a collection, but isSalable() returned nothing at all: not a 0 and not a 1.

Looking into isSalable() is a big, fun mess that does a whole lot of things I never really wanted to peek under the hood at. Luckily, I realized it relies on a bunch of data from the indexer….

That's when I headed over to the indexer status page in the admin panel and saw the indexers were processing. Then I noticed they had been processing forever: every minute the cron fired off, they picked right back up. That's not right….
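
For what it's worth, the same status is visible from the CLI as well:

php bin/magento indexer:status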

So I decided to run them via the CLI:


[staging@dev htdocs]$ php bin/magento indexer:reindex
Table status for design_config_dummy_cl is incorrect. Can`t fetch version id.
Table status for customer_dummy_cl is incorrect. Can`t fetch version id.
Table status for catalog_product_flat_cl is incorrect. Can`t fetch version id.
Table status for catalog_category_flat_cl is incorrect. Can`t fetch version id.
Table status for catalog_category_product_cl is incorrect. Can`t fetch version id.
Table status for catalog_product_category_cl is incorrect. Can`t fetch version id.
Table status for catalogrule_rule_cl is incorrect. Can`t fetch version id.
Table status for catalog_product_attribute_cl is incorrect. Can`t fetch version id.
Table status for cataloginventory_stock_cl is incorrect. Can`t fetch version id.
Table status for inventory_cl is incorrect. Can`t fetch version id.
Table status for catalogrule_product_cl is incorrect. Can`t fetch version id.
Table status for catalog_product_price_cl is incorrect. Can`t fetch version id.
Google Product Removal Feed index is locked by another reindex process. Skipping.
Google Product Feed index is locked by another reindex process. Skipping.
Table status for catalogsearch_fulltext_cl is incorrect. Can`t fetch version id.
Table status for amasty_feed_entity_cl is incorrect. Can`t fetch version id.
Table status for amasty_feed_product_cl is incorrect. Can`t fetch version id.

What the heck! Luckily, thanks to some Stack Overflow posts, I found out that either the auto_increment value had been corrupted or MySQL just needed to refresh some kind of internal table status cache. So the fix here was simply:


analyze table `design_config_dummy_cl`;
analyze table `customer_dummy_cl`;
analyze table `catalog_product_flat_cl`;
analyze table `catalog_category_flat_cl`;
analyze table `catalog_category_product_cl`;
analyze table `catalog_product_category_cl`;
analyze table `catalogrule_rule_cl`;
analyze table `catalog_product_attribute_cl`;
analyze table `cataloginventory_stock_cl`;
analyze table `inventory_cl`;
analyze table `catalogrule_product_cl`;
analyze table `catalog_product_price_cl`;
analyze table `catalogsearch_fulltext_cl`;
analyze table `amasty_feed_entity_cl`;
analyze table `amasty_feed_product_cl`;

ANALYZE TABLE forces that “internal cache” nonsense (or whatever it is) to update, and after running the statements above, our indexer ran just fine on one of our test servers.
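
If you have a lot of these _cl changelog tables, a quick sketch like the following can generate and run the ANALYZE statements for all of them at once (it assumes your Magento database is named magento, so adjust the name and credentials to match your setup):

# Generate an ANALYZE TABLE statement for every changelog (_cl) table:
mysql -u root -p -N -B -e "SELECT CONCAT('ANALYZE TABLE ', table_name, ';')
  FROM information_schema.tables
  WHERE table_schema = 'magento' AND table_name LIKE '%\_cl'" > analyze_cl.sql

# Review the file, then run it against the same database:
mysql -u root -p magento < analyze_cl.sql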

On another test server, we were met with more errors:


Catalog Product Rule indexer process unknown error:

SQLSTATE[HY000]: General error: 1419 You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable), query was: CREATE TRIGGER trg_catalog_product_entity_after_insert AFTER INSERT ON catalog_product_entity FOR EACH ROW
BEGIN
INSERT IGNORE INTO `catalogrule_product_cl` (`entity_id`) VALUES (NEW.`entity_id`);
INSERT IGNORE INTO `scconnector_google_remove_cl` (`entity_id`) VALUES (NEW.`entity_id`);
INSERT IGNORE INTO `scconnector_google_feed_cl` (`entity_id`) VALUES (NEW.`entity_id`);
END

Product Price indexer process unknown error:

SQLSTATE[HY000]: General error: 1419 You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable), query was: CREATE TRIGGER trg_catalog_product_entity_after_insert AFTER INSERT ON catalog_product_entity FOR EACH ROW
BEGIN
INSERT IGNORE INTO `catalog_product_price_cl` (`entity_id`) VALUES (NEW.`entity_id`);
INSERT IGNORE INTO `scconnector_google_remove_cl` (`entity_id`) VALUES (NEW.`entity_id`);
INSERT IGNORE INTO `scconnector_google_feed_cl` (`entity_id`) VALUES (NEW.`entity_id`);
END

Luckily, the fix here was pretty easy too: just log into MySQL from the CLI and run:


set global log_bin_trust_function_creators=1;
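
Keep in mind that SET GLOBAL only lasts until the next MySQL restart. To make it stick, you can also drop the setting into your MySQL config (a sketch; the config path varies by distro, /etc/my.cnf is common on CentOS):

# Persist the setting across MySQL restarts by adding it under [mysqld]:
cat >> /etc/my.cnf <<'EOF'

[mysqld]
log_bin_trust_function_creators = 1
EOF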

And then all should be well:


[staging@dev htdocs]$ php bin/magento indexer:reindex

Design Config Grid index has been rebuilt successfully in 00:00:00
Customer Grid index has been rebuilt successfully in 00:00:03
Product Flat Data index has been rebuilt successfully in 00:00:01
Category Flat Data index has been rebuilt successfully in 00:00:00
Category Products index has been rebuilt successfully in 00:00:00
Product Categories index has been rebuilt successfully in 00:00:00
Catalog Rule Product index has been rebuilt successfully in 00:00:00
Product EAV index has been rebuilt successfully in 00:00:00
Stock index has been rebuilt successfully in 00:00:00
Inventory index has been rebuilt successfully in 00:00:00
Catalog Product Rule index has been rebuilt successfully in 00:00:00
Product Price index has been rebuilt successfully in 00:00:04
Google Product Removal Feed index has been rebuilt successfully in 00:00:00
Google Product Feed index has been rebuilt successfully in 00:00:00
Catalog Search index has been rebuilt successfully in 00:00:00
Amasty Feed Rule index has been rebuilt successfully in 00:00:00
Amasty Feed Products index has been rebuilt successfully in 00:00:00

Lastly, for a healthy Magento install, you'll want to make sure your indexers are set to run on a schedule:


php bin/magento indexer:set-mode schedule
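
You can confirm the mode afterwards with:

php bin/magento indexer:show-mode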


Migrate a Vultr instance to XCP NG

by on Jan.31, 2020, under Uncategorized

So today I migrated a Linux VM from Vultr to an internal XCP-ng box.

We used Clonezilla to store the image in a Wasabi bucket. Then we used Clonezilla on the XCP-ng machine to bring it back down.

We ran into an issue booting the server after the restore: we were getting dracut errors and being dropped into a dracut shell. Pretty scary stuff, and after a few hours of trying things, the solution turned out to be really easy.


* From the GRUB menu, select the last entry (the rescue).
* Then log in, either as root or as a normal user followed by su -.
* Run the following command:

dracut --regenerate-all --force
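
If you want to sanity-check it before rebooting, the regenerated initramfs images should show fresh timestamps:

# The newest initramfs images should have just-now timestamps:
ls -lt /boot/initramfs-*.img | head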

Then reboot the box.


SSLs.com Review – SCAMMERS BE CAREFUL

by on Dec.29, 2019, under Uncategorized

I just wanted to write a review about my last interaction with ssls.com. Previously, we had always bought certain products from them, and they were reliable, helpful, and always delivered.

Unfortunately, they have now turned into a deceitful company and have been engaging in possible fraud. That’s right – buyer be warned.

A few years ago, the SSL community changed the maximum age of SSL certificates from 5 years to 4 years, then 3 years, and now down to a 2-year max. This meant that average order values for companies like ssls.com plummeted by 50%. Unfortunately for them, customer acquisition did not drop at all; it possibly even increased over this span as more competitors came to light. Additionally, Let's Encrypt came along offering free SSL certificates and eating into market share.

Now here come the genius minds at ssls.com with a bridge to sell you (a reference to the old “I have the Brooklyn Bridge to sell you” scam). They started offering a 4-year certificate for you to buy. That's right, the max is 2 years, but they began selling a 4-year cert. When you dig into it, it's 2 x 2yr certs. Cool, that's fine, I thought. Being in the industry, I knew 4-year certs were no longer available, and they had found a neat workaround to get bulk orders; they walk you through the process of what they are doing. Nothing shady there. You're getting a discount for buying 2 certs, and you understand that after 2 years you have to activate the 2nd 2yr cert. Cool. Fair game.

But then I guess sales weren't that good, because why buy 2 x 2yr certs when you really only need a cert for now and who knows what is going to happen 2+ years from now? So, I'm guessing sales weren't too hot, because now we get to the scam part. Now when you buy a 2yr certificate, they give you 2 x 1yr certificates. They tell you to come back in one year to get your next year. You're not getting the 2-year certificate that you purchased.

That's right, they make it a hassle to get your two years up front – they don't even sell it to you. It's now 2 x 1yr certs you get, which is not the industry standard.

The problem here is that there is no mention of this on the product detail pages or anywhere else. They advertise and sell the product (a 2yr certificate) as the same product that you'd buy anywhere else. They are a reseller, so the same product is available elsewhere, and when price comparing or shopping apples to apples, you really believe it is an apple you're getting.

However, if you bought the 2-year certificate elsewhere, you'd be issued a 2-year certificate. You would not be issued a 1yr certificate and told to come back in a year to get a new certificate. Each time you come back is a hassle: you have to reactivate, possibly re-upload files, work with your hosting company (some hosting companies charge a fee for certificate installation and testing), etc.

So when you're being told that you're buying a 2-year certificate and then you're being issued a 1-year certificate – that can only be explained as fraud.

If you want a 2yr certificate from them, you're now stuck buying the fake 4-year certificate so you can get the 2 x 2yr certificates.

Imagine going to buy an Audi A8. Imagine you price shop between two dealers and then decide to go with the slightly cheaper dealership. Now imagine that when you sign everything and hand them cash, they bring you in the back and give you an A4. They then tell you to return the A4 in a year, and they will give you another A4. That's not what you bought! You bought an A8 – not an A4.

Plain and simple – this company has become a bunch of scam artists, playing cheap tricks on their customers all in the name of a quick buck.

This apparently started around December 11th, 2019. I hope they see the error of their scammy ways in 2020 and reverse course. Otherwise, previously loyal customers who have ordered 100+ certs will leave in droves.


Recovering from a Partially Deleted /var on CentOS

by on Jul.18, 2019, under Uncategorized

We had someone run rm -Rf /var instead of rm -Rf var inside a project folder. If you're reading this, you probably know the pain.

The admin Ctrl+C'd the rm after a few seconds (or minutes) when the command didn't finish as quickly as he expected it to.

Lucky for us, we were able to run yum, and rpm -qa still worked.

If you're in the same boat and


yum info

works, you have a shot at rebuilding the box.

The first step is to run


rpm -Va > /root/missing.txt

Once this completes you will want to print out just the missing stuff:


cat /root/missing.txt |grep missing

Next, you will want to use a multi-line select tool, sed, or something similar to take each line that starts with "missing" and wrap rpm -q --whatprovides around the path. For example:


rpm -q --whatprovides /var/account
rpm -q --whatprovides /var/lib/mailman
rpm -q --whatprovides /var/lib/mailman/archives
rpm -q --whatprovides /var/lib/mailman/archives/private
rpm -q --whatprovides /var/lib/mailman/archives/public
rpm -q --whatprovides /var/lib/mailman/data
rpm -q --whatprovides /var/lib/mailman/data/sitelist.cfg
rpm -q --whatprovides /var/lib/mailman/lists
rpm -q --whatprovides /var/lib/mailman/spam
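
If the list is long, a one-liner along these lines can build those commands for you (a sketch that works off the /root/missing.txt generated above; the last field on each "missing" line is the path):

# Wrap each missing path in an rpm -q --whatprovides query:
grep '^missing' /root/missing.txt | awk '{print "rpm -q --whatprovides " $NF}' > /root/whatprovides.sh

# Run the generated queries and de-duplicate the package list:
sh /root/whatprovides.sh | sort -u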

You will now want to run these commands and take note of the RPMs returned.

For example:

[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/archives
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/archives/private
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/archives/public
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/data
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/data/sitelist.cfg
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/lists
mailman-2.1.12-26.el6_9.3.x86_64
[bessig@after ~]$ rpm -q --whatprovides /var/lib/mailman/spam
mailman-2.1.12-26.el6_9.3.x86_64

We can see we need to reinstall mailman-2.1.12-26.el6_9.3.x86_64.

So fire off:


yum reinstall mailman-2.1.12-26.el6_9.3.x86_64
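
If a lot of packages are affected, you can also feed the whole de-duplicated list to yum in one pass (building on the whatprovides.sh sketch above):

# Reinstall every package that owns a missing file in one go:
sh /root/whatprovides.sh | grep -v 'not owned' | sort -u | xargs yum -y reinstall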

Sometimes you may get a message from yum telling you it can no longer find that package. In this case you may need to run:


yum update mailman

In rare cases, you may need to find the rpm and manually reinstall it.

In our case, MySQL was completely messed up, so we had to reinstall MySQL and then bring over the backup dumps and reimport them. Luckily for us, our most recent backup had completed 4 minutes before this happened.

Once you think you're done, rerun


rpm -Va > /root/missing.txt

and make sure this returns nothing:


cat /root/missing.txt |grep missing

Hope this helps. There's also a way to recover from a fully deleted /var directory (where the admin doesn't cancel the command).

I have done that before and hope to do a write-up on it in the future.


Accept.js error E_WC_14:Accept.js encryption failed

by on Jul.06, 2017, under Uncategorized

I, much like many people over on the Authorize.net forums, was receiving this error. Authorize.net seems to be clueless about what this error is or what causes it. Their official documentation even skips over this error message, not mentioning it at all.

The problem is that the Accept.js library will catch ANY JS error and invoke its own handler, which gives out this code. What does that mean? It means that even though Auth.net is telling you there's an error with their response, it's most likely a JS error on your own page.

For example, in my case I had something like this:


<input type="text" id="firstName" />
...
$("#first_name").val(); // undefined – the element's id is actually "firstName"

The problem above is that #first_name doesn't exist, because I used camel case when I defined the id. So in this case, I had a JS error on my page. Unfortunately, Authorize.net's Accept.js module catches ANY JS error and throws this silly error message that makes you scratch your head and suspect everything from SSL to bad tokens.

Hopefully this post saves someone time. I know this problem ate about 2-3 hours of my time. Looking at the auth.net forums, it looks like many people have wasted way more time.


Wicker Patio Furniture

by on Jun.20, 2017, under Uncategorized

We recently began working with a new client called Wicker Warehouse, located in Hackensack, NJ. They've been in business since 1978, so they know a thing or two about Wicker Patio Furniture, Outdoor Wicker Dining Sets, and Wicker Chairs. Needless to say, I know where I will be buying my outdoor furniture from now on. The biggest difference is that this stuff is built with quality. The stuff you see on Overstock and Amazon for a hundred or two cheaper won't even make it past 2 years. From a marketing perspective it makes for an interesting challenge for us, because most of our clients want repeat business. Unfortunately, the product is just so darn good that the repeat business is many years away, so we have to focus on new customer acquisition, always! It's making for a fun and challenging project for sure.


[Solved] AMD FX-9590 Lockups, System Unstable, Restarting Itself

by on Mar.27, 2017, under Windows

I built an AMD FX-9590 system almost a year ago, and I had no problems for almost that entire year. Then I started having a ton.

Random Restarting

The first issue was that the computer would randomly restart itself. After going through a ton of debugging steps, including reseating and replacing RAM, the root cause turned out to be overheating. I have a Corsair water cooling system, and dust had built up on the radiator, preventing it from cooling down. The radiator was so hot you could not touch it with bare hands. A can of air and dusting it off seems to have resolved that issue.

System Instability

A week or two after the random restarting, the computer started to just lock up. Sometimes it would lock up 5 minutes after a reboot; other times it would make it a whole 4 or 8 hours, and sometimes even a full day or two. I tried a bunch of things, but what really seemed to work was the following. Please note, I did all of these things at once, and since they seemed to work and I don't care to play around with the machine anymore, I am keeping all of them set! So I don't know exactly which specific setting or set of settings fixed it.

  • AI Tweaker -> AMD Turbo Core – Disabled
  • Cool'n'Quiet – Disabled
  • Auto Boost – Off
  • C1 and C6 states – Disabled
  • Installed and mounted a side case fan that blows directly onto the northbridge heatsink on the motherboard.

Since I have done the above, the system has been stable for 4 days now and has not locked up once. I am writing this up to hopefully help someone else. Many of these tips I found on Tom's Hardware from people having the same issues – so big thanks there!

UPDATE: The lockups came back. I wound up repasting the CPU with new thermal paste and cleaning all of the fans and dust out of the radiator. I have not had a single lockup since.


check_mk local checks not working

by on Apr.25, 2016, under Uncategorized

So today I used a MySQL replication check found here:

https://gist.github.com/jleggat/1349602

But I couldn't get it to work. I created the /usr/lib/check_mk_agent/local/ folder that all of the documentation says you need to put the script into. I re-inventoried and did everything, but the check wasn't being found. Then, after an hour of searching, I came across this gem:

check_mk_agent | head | grep Local

It told me that my local folder was in a different spot:

[root@web2 check_mk_agent]# check_mk_agent | head | grep Local
LocalDirectory: /usr/share/check-mk-agent/local

I moved the script there and wham!  It worked!
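
For reference, the whole fix boiled down to something like this (the script name is just whatever you saved the gist as):

# Move the local check to the directory the agent actually reads,
# then make sure it is executable:
mv /usr/lib/check_mk_agent/local/check_mysql_replication /usr/share/check-mk-agent/local/
chmod +x /usr/share/check-mk-agent/local/check_mysql_replication

After that, re-inventory the host from your monitoring server so the new service gets picked up.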


Debragga Magento Conversion

by on Apr.23, 2016, under Magento

So we've recently worked on converting Debragga.com into a Magento site from a very uncommon platform called vsadmin. It's been absolutely stellar. The design and everything else transferred over, so it was mainly just a back-end conversion. But we've seen SEO rankings increase, products and descriptions become easier to update, and all of the opportunities the Magento ecosystem has to offer become available.

We've configured everything as bundled products, since everything in the warehouse is a single "part". Everything on the site is then broken down into a package containing the SKUs of all the parts that make up the package. This has simplified the pick-and-pack process tremendously, allowed for quicker training, and made it possible for seasonal pickers and packers to assist during busy times.

Anyway, if you haven't had Dry Aged Beef or Kobe Beef, I definitely suggest you head over to Debragga and get some! Your mouth will thank you.


Magento 2 Attribute ‘setup_version’ is missing for module

by on Nov.29, 2015, under Magento, Web Development

When greeted by the error message:

Attribute 'setup_version' is missing for module

This is because the attribute schema_version was changed to setup_version at some point in the M2 development cycle. So a module declaration in etc/module.xml along these lines (module name and version are just placeholders):

<module name="Vendor_Module" schema_version="1.0.0"/>

Should now be:

<module name="Vendor_Module" setup_version="1.0.0"/>
