Categories
Community

Websites Making A Difference

Webmad got its name because we wanted to be a part of making change in this world: Websites Making A Difference. As part of fulfilling that vision, every now and then a good cause comes up where we can offer our services to help the community.

One such cause came up recently. Our local community is currently getting a new public pool – the South West Leisure Centre. This is all rather exciting for the community, as we lost a great resource many years ago when the Sockburn pool was closed down. When the pool was being planned, submissions for features were put out, and one of the features that just didn’t make the cut in terms of funding available from the local council was a hydrotherapy pool to be added into the complex.

What is a hydrotherapy pool? Well, it’s a heated pool commonly used as part of physical rehabilitation or therapy to enable weightless movement of a joint. It is useful for lots of things, from arthritis relief to recovery from injury. Having the ability for local residents to benefit from this feels like a great thing to assist with.

When the local Rotary club came to us with their plans to fundraise to help ensure this great facility is added into the new pool complex, we jumped on board, and that is where https://hornbyhydrotherapy.nz came about. There is a goal of $1.4m in funding to make this happen, and as of writing this the team is at $301k, so they are making good strides on the funding journey.

The website is purely a front to the fundraising operation – it has been designed to be visually appealing, have all the information you need to get on board, and links off to the main online giving platform, givealittle. Built using WordPress, the Hornby Rotary team can update the site as needed along the fundraising journey, and it provides a centralised point to direct people to.

We are proud of our association with this cause, as one of the many things we are looking to do in our community to give back, and help make it a better place. If you feel like you’d be keen to help, let us know.

Categories
Development

Too many options!

Ran into a peach of an issue on Friday – a client complains of a report being ‘glitchy’. They have a learning management system which contains a report with a filter that lists all student IDs in an HTML select element. No problems, you’d think… until you realise there are 13,000-plus students, all listed out as options within the select element… and the page has performance issues trying to load all the options.

What can we do?

So my first instinct was to throw some JavaScript at it, and convert the problem from a select element to a div with some JavaScript to handle displaying a much smaller subset of options depending on what is being typed – Select2 is my go-to JavaScript select replacement library, so we threw that at it.

Did it work? No. Same performance issues, though they lessened as you filtered down the result set by entering digits… So it’s a bit of a better look, but it still ain’t right. What a pickle! Where to now?

After a period of trawling Google search results for a solution, I stumbled upon the datalist HTML element. There is talk of it being a more efficient alternative to a select… so here is what I’ve done:


jQuery('#id_filter_fuserfield_idnumber').after('<input type="text" list="filt_students" id="id_filter_fuserfield_idnumber_entry"/><datalist id="filt_students"></datalist>');
var opts = "";
jQuery('#id_filter_fuserfield_idnumber option').each(function(k, v){
    // Only emit the selected attribute when it is actually set,
    // otherwise "undefined" ends up in the markup
    var selected = jQuery(v).attr("selected") ? " selected" : "";
    opts += "<option" + selected + ">" + jQuery(v).text() + "</option>";
    if (jQuery(v).attr("selected")) {
        jQuery('#id_filter_fuserfield_idnumber_entry').val(jQuery(v).text());
    }
});
jQuery('#filt_students').append(opts);
jQuery('#id_filter_fuserfield_idnumber').hide();
jQuery('#id_filter_fuserfield_idnumber_entry').on('change',function(){
	jQuery('#id_filter_fuserfield_idnumber').val(jQuery('#id_filter_fuserfield_idnumber option:contains("'+jQuery('#id_filter_fuserfield_idnumber_entry').val()+'")').attr('value'))
});

So to explain this:
– add a text input and a linked empty datalist after the select element (note: no name attribute on the text input, so its data is not submitted with the form).
– populate the datalist with items from the select element. If an item from the select options is marked as selected, update the text box to reflect that.
– hide the original select element.
– listen to changes on the text input so that when a value is chosen, the select has its value updated to match the chosen value.
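For reference, the option-building step can be sketched without jQuery as a plain function (the element IDs in the usage comment come from the snippet above; treat this as an illustrative sketch rather than drop-in code):

```javascript
// Build the <option> markup for a datalist from an array of values.
// (Student IDs are plain text; anything richer would need HTML-escaping.)
function buildDatalistOptions(values) {
  return values.map(function (value) {
    return "<option>" + value + "</option>";
  }).join("");
}

// Usage sketch (browser only):
// var select = document.getElementById('id_filter_fuserfield_idnumber');
// var values = Array.prototype.map.call(select.options, function (o) { return o.text; });
// document.getElementById('filt_students').innerHTML = buildDatalistOptions(values);
```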

Testing this, it is really fast. Like – really fast. No performance issues with 13,000 options to choose from, and it all behaves as expected, very similar to a select input, but you have the advantage of being able to use the text input to start typing and filter the list of options to choose from accordingly. Very cool!

In summary – don’t always reach for Select2 to solve select problems – the datalist element is your friend, and offers similar basic functionality (yeah I know – Select2 can do a bunch more, especially when it comes to ajax loading etc, so horses for courses, but if you don’t need the extra functionality…). I know I’ll be looking at using datalists more going forward.

Categories
Hosting Security

Vulnerability

Being vulnerable is a powerful starting point for learning and change. When it comes to software, this is also the case, but never in a good way or at a convenient time. This past week we were notified of a relatively serious vulnerability in the WordPress content management system. In this post we will explore the issue, the resolution, and actions we can take to ensure exposure is minimised.

So mid afternoon NZ time on the 7th of January 2022, we were alerted to a new WordPress security vulnerability – https://vuldb.com/?id.189817. The vulnerability report is kinda bland, but here is a more detailed explanation.

What researchers found was that an authenticated WordPress user with the ability to manage tags, categories, or content metadata (attributes associated with content) could manipulate the data so that additional commands could be sent to the database server. So for example (and this is crude and untested), someone could fashion a request that contained an ‘IN’ clause containing “‘; truncate table wp_users;”, and you would find that you’d lose all the content from your WordPress users table, and that is kinda unhelpful.

Once this vulnerability was found, the WordPress development team worked on a patch to prevent issues. The patch has been published at https://github.com/WordPress/wordpress-develop/commit/c09ccfbc547d75b392dbccc1ef0b4442ccd3c957 but to explain what they are doing: basically they are replacing any space characters with underscores in the inputted data, so it all looks like one word, and an SQL statement cannot be formed.
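To illustrate the idea (this is a simplified sketch, not the actual patch code, which is PHP inside WordPress core), collapsing whitespace stops a submitted term from ever parsing as a multi-word SQL clause:

```javascript
// Simplified sketch of the patch's idea: replace whitespace in submitted
// terms with underscores so the value reads as a single word and cannot
// smuggle in a multi-word SQL statement.
function sanitiseTerm(term) {
  return String(term).replace(/\s+/g, "_");
}

// The crude example payload from above is defanged:
// sanitiseTerm("'; truncate table wp_users;") -> "';_truncate_table_wp_users;"
```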

So how serious is this vulnerability? Well, it depends how much you trust your users, and how much access you give them to curate their own datasets. For your average WordPress brochure site with no user logins, there is minimal risk, as you are curating the content for the end user, therefore the requests will be safe – you won’t want to scupper your own site. But giving users the power to enter or alter data, well, that’s where it can all go wrong if such capability is in the wrong hands, and if exploited, it could mean very serious implications for your database.

For all of Webmad‘s hosted WordPress site clients, we have patched this vulnerability on all potentially affected sites on our managed servers, eliminating the risk here, but if you have a website you think could be at risk, certainly get in contact and we can patch / update your site too.

As per other posts on this site, the key is to keep your website software updated to the latest versions, so that any security issues are found and repaired as soon as possible. Much of this either requires keeping up to date with current threats by following threat boards etc, or ensuring you have a regular update schedule for your site. With WordPress you can also turn on auto-updating, which helps automate the ‘keeping on top of things’.

Categories
Hosting Security

Log4j and global panic

Nowadays, the world is getting used to having things thrown at it to worry about. And we all hope that smart cookies in a lab somewhere will find a cure. Well – a couple of days ago, some boffins found a new computer bug that has been given hazard level 10, and I can assure you – that gets us geeks all rather excited.

CVE-2021-44228, or the Log4j bug, was first published, with a patch, on the 9th / 10th of December. This vulnerability, which was discovered by Chen Zhaojun of Alibaba Cloud Security Team, impacts Apache Log4j.

Yip – that’s all foreign language to most humans, but the long and short of it is, this is a fresh vulnerability found in a piece of software very commonly used across the world for storing software activity logs, that allows anyone, without permission, to hijack a computer system and effectively run their own commands – from establishing a ransomware attack on a host, through to compromising secure user records.

The vulnerability has been shown to be active in software that builds on Log4j as well – from well known names like Apple iOS (yep – your mobile phone / tablet), macOS, VMWare, Discord, Ubiquiti etc. A list is being collected via https://github.com/YfryTchsGD/Log4jAttackSurface – a patch has been released to counter the attack, but the slower people are in applying the patch, the more exposed systems are, and the more havoc can be wreaked globally.

So what can we do?

  • Check for, and apply, any updates from software manufacturers. Always make sure you are running the latest versions of everything. This is paramount for both your security and your peace of mind.
  • Consider application of a strong, secure firewall to block potential threat traffic from getting to your systems
  • Contact any providers you use that could be storing sensitive information and seek assurances that they have taken appropriate measures to counter the risk associated with this new threat

Here at Webmad all of our hosting systems have been secured against this threat, simply because we are not using any services that rely on Log4j, and any of our upstream providers have been quick off the mark to get this resolved. Should you have any concerns though, by all means get in contact.

Categories
Hosting Security

Never trust an email

Over the last week, some of our shared hosting clients have been targeted by a rather complex email attack that is focusing on clients using cPanel based hosting, like we use at Webmad.

The attack first detects if the website hosting is cPanel based, and then, if it can locate a contact email address from the website, it emails the contact with a message that looks like a legitimate cPanel disk space usage warning, requesting you take various actions to protect your website from downtime.

The email typically mimics a standard cPanel disk space warning notification.

So the key components of the email to look out for are:

  • If you hover your mouse over the links in the email, they are not the same as the link text. This is a huge red flag, as it is misleading you as to where you think you are being directed.
  • The From address always has ‘no-reply@’ at the start – most hosting providers will customise this so it comes from them, not from your own domain name
  • The disk usage percentage is always over 95%

Please ignore these emails, and if you have followed any of the links, do let your hosting provider know as soon as possible, as details you provide via those links could compromise your website’s hosting security – it’s best to work through the best course of action with your hosting provider from here.

For Webmad hosted clients – we don’t actually have set disk quotas on our hosting, so we can assure you that you will never receive any legitimate emails like this from us – we prefer to contact you directly, using humans, not automation. Contact us if you ever have any concerns.

Stay safe out there everyone!

Categories
Hosting Technology

Aaargh! Facebook is down!

What a shock to many – their worlds come crashing down as the need for social interaction is unable to be met by the world’s most commonly used social networks, all owned by Facebook. Today (5 Oct 2021) many here in New Zealand have woken to a worldwide outage – visiting the sites returns a DNS / domain error and a white screen that no doubt has some rather highly paid network engineers at Facebook having kittens.

So why is it down? Well – that’s the question everyone is speculating on, and much of it comes down to the core structure of the internet, and how companies harness tools to give us all the best experience possible. The most likely reason is what I’ll be walking us through today.

How does the internet work?

Well, it all starts off in your internet browser – Google Chrome, Mozilla Firefox, Internet Explorer / Edge / whatever Microsoft are calling it now, Apple Safari. Lots of options, all working the same way. You type a web address (URL) into the address bar and hit enter, and within seconds the page you want renders in the browser, and we carry on our merry way. But there is a bunch of communication that goes on within those few seconds to make this all work.

The first part of this is the address translation. There is a global system called DNS (the Domain Name System) which translates what you have typed in (i.e. https://webdeveloper.nz/ ) into a series of numbers called an IP address. The servers that store the website data each have an IP address that they respond on, and they deliver the web pages back to you. It’s a bit like your phonebook: I want to call someone by this name, so please give me their phone number to do so.
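The phonebook analogy can be sketched as a toy resolver (the addresses below are from reserved documentation ranges, not real DNS records):

```javascript
// A toy DNS "phonebook": hostnames map to IP addresses.
// These addresses are made up for illustration.
const phonebook = new Map([
  ["webdeveloper.nz", "203.0.113.10"],
  ["example.com", "198.51.100.7"],
]);

function resolve(hostname) {
  if (!phonebook.has(hostname)) {
    // This is the failure your browser reports as "can't find the domain"
    throw new Error("NXDOMAIN: " + hostname);
  }
  return phonebook.get(hostname);
}
```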

Once the address translation has happened, you can talk directly to the servers and get the data you need to render the web page. The faster this translation happens, the faster your website will load for end users. And this is where the problem is believed to have happened for Facebook today.

Where has it all gone wrong?

The way normal IP addressing works is that one server typically has one IP address. It is unique, and you can get a bunch of details from it (check out https://ip-api.com for some of this info). The downside is that a single IP address typically translates to one server, which may actually be on the other side of the world from you. And because light can only travel so fast along the internet backbones that link us all together via fibre optic cables, there is a delay talking from little old NZ through to big datacenters in the USA or Europe.

What some clever clogs have worked out, though, is that you can use Content Delivery Networks to reduce the physical distance between your web servers and your customers around the world, making websites load so much quicker. Yay! But that is only part of the equation. This works for website content, but it doesn’t work for the DNS lookup / translation aspect. And this is where we get to BGP routing. This is where we believe the outage was caused today.

You’re getting technical…

BGP Routing, or Border Gateway Protocol Routing, is a fancy way of allowing one single advertised IP address to be shared by multiple servers globally, which can then serve website clients from the closest possible geographic location. As there are lots of servers that can serve the data of the one IP address, it can be very fault tolerant, and it increases the speed at which users get website addresses translated to IP addresses, so that traffic can be routed to the right places and the websites work.

In today’s outage, the hardware that does this BGP routing globally for Facebook, allowing them high website speeds, has been misconfigured / lost its configuration. What this has meant is that anyone trying to do lookups / translations of any of the Facebook operated web addresses is getting a blank screen, with their browser telling them that it can’t find the domain name.

As I write this it looks like things are slowly starting to resume normal operations after four and a bit hours – there is a Facebook branded error page now, so we are at least seeing Facebook servers again, but I suspect the next issue they will face as they slowly bring the site back online is the large influx of people accessing the sites after their drought and trying to catch up, effectively swamping their servers.

What can we learn from this?

  • Firstly – in the internet world, you are never too big to fail.
  • Secondly – the world is still ok without social networks.
  • All the geekery in the world (CDNs, BGP routing etc) won’t necessarily save you from good old fashioned human error, although it does help reduce its occurrence.

Here at Webmad we are well versed in using these various tools to get you the best outcomes and speed for your website, using trusted providers, and offering proven results. We’ve run sites using BGP failover routing to offer highly available, geolocation-aware systems within NZ, we use CDNs all the time, and we can quickly pinpoint where issues might be, and how to fix them. Could we fix Facebook’s troubles? That’s a bit above our pay grade, but we can definitely put our knowledge to great use as part of your web team. Drop us a line to get the best results for your online assets.

Categories
Interaction Security Technology

Cookies – trick or treat?

One of many annoyances of the internet these days is the dreaded ‘Please accept our cookies’ popup you see on a great number of websites, warning you of the intention of the site you are visiting to give you things called cookies. They sound so sweet, digestible, and innocent. But how many of us actually know what they are, how they are used, and whether they are dangerous or not?

So – what is a cookie and why are they on the internet?

A cookie, in the internet sense, is a wee fragment of data that a website can store in your web browser for a defined period of time. This can be until you close your browser, or it can be days or weeks. Once a cookie is stored in the end user’s browser, that cookie of information is sent to the server with every new page request or interaction with that website’s server. Cookies are restricted to only send data back to the domain name that set them. A cookie is unique to each user – two cookies may store the same information, but because they are stored on the end user’s device, they are unique to that user.
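At the wire level a cookie jar is just name=value pairs joined with “; ”. A small sketch of reading one (the names and values here are invented for illustration):

```javascript
// Parse a document.cookie-style string ("session=abc123; theme=dark")
// into a plain object of name -> value.
function parseCookies(cookieString) {
  const jar = {};
  cookieString.split("; ").forEach(function (pair) {
    const idx = pair.indexOf("=");
    if (idx === -1) return; // skip anything that isn't name=value
    jar[pair.slice(0, idx)] = decodeURIComponent(pair.slice(idx + 1));
  });
  return jar;
}
```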

Where they get powerful is that website developers can store data in a cookie that enables them to customise our browsing experience on their website. Typically what this looks like is when a user has logged in to a website, a token is stored in a cookie for that user session so that every subsequent request to the server can prove that it is from the logged in user, and the server can customise its response according to your profile and stored settings. This is really useful.

Where this can get risky though, is when you visit websites that use advertising networks. Advertising networks can set cookies on your computer to track what websites you have visited, and your preferences so they can target you with ads for things they think you need. This is seen as predatory, and can give these networks a huge wealth of information about you and your online habits. The more websites an advertising network is used on, the more data they can collect.

It’s this predatory use of cookies on websites that has given cookies their bad name. Cookies as an object are quite harmless – they do not contain code that gets executed or anything dangerous, but they can store information that can be used to identify individual users and ‘follow’ them around. To break up the amount of data that can be used to identify you, it is recommended to use a cookie blocker in your browser that can determine whether a cookie is from an advertising network and block it.

While cookies are generally safe to accept, websites in many geographic locations nowadays need to request the user’s permission before they can store cookies in their web browser. Lawmakers in these regions have passed laws to make this mandatory for sites doing business there, so that their people can make informed decisions on what information can follow them around on the internet.

If you visit a website that you know you won’t be logging in to or signing up for, then there is no need to accept the cookies on that site. If you are keen to interact with the site, and have a customised experience, then accepting cookies is quite fine. You can always clear out cookies from your browser at any stage – the process varies depending on what web browser you are using, but you can view the content of any of the cookies, and delete whichever ones you prefer.

Categories
Hosting Technology

What is a Content Delivery Network (CDN)?

This past week the buzz-word floating about internet related conversations has been the drop out of a huge chunk of the internet related to an outage from the CDN provider Fastly. A good number of websites went out world-wide, and high traffic sites experienced either total outages or had parts of their networks unreachable. It felt like a digital apocalypse for many. For some of our clients there was glee as their competition were taken offline by this outage. In the end, it was only for an hour, and late in the evening New Zealand time, but it still caused panic.

So how did an outage at a company no-one in the general public has really heard of before, cause such a ruckus? Well to get to the bottom of that we need to get a better understanding of how the internet functions, and some of the tips and tricks that webmasters employ to get their content in front of their users as quickly as possible so as not to lose users.

When someone goes to a website on the internet there is a flurry of communication between their device and various internet services to then serve the web page. Here is a rough guide to what happens:

Once the user has told their web browser what website they are wanting to view, requests are fired to Domain Name System (DNS) servers in order to translate the address entered into an address that computers understand (an Internet Protocol (IP) address). That information is then used to talk to the appropriate server (or load balancer if the website is big enough, which then directs traffic to an available web server) to return the web page you have requested. That page may have a number of images, fonts, and scripts linked to it that all need to be downloaded in order to display the website you have requested on the device you are requesting from.

That’s a bit of the background behind how the internet works for websites. But where do CDNs fit into this mix?

Ever called someone overseas and noticed the delay between what you say and their response? This effect is called latency. It’s the delay between your initial request and you getting a response. Even with a global network using fibre connections, which carry data at close to the speed of light, if I request a website on my device here in New Zealand and it is hosted in the UK, every request to the web server is going to take around half a second just to get from my device to the server and back, and that does not factor in any processing time on the web server slowing things down as well. If a web page has 30+ media assets, which is very common nowadays, the website will feel almost unusable. The further away a server is from its users, the slower it will be able to respond to user requests.
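Using the post’s ballpark figures, here is a back-of-envelope estimate of the travel-time cost (bandwidth and server time ignored; all numbers are illustrative):

```javascript
// Rough page load estimate from round trips alone: assets download in
// batches limited by how many connections the browser opens to one host.
function estimateLoadMs(rttMs, assets, connections) {
  const batches = Math.ceil(assets / connections);
  return batches * rttMs; // each batch costs roughly one round trip
}

// 30 assets over 6 connections from a UK server (~500ms round trip):
// estimateLoadMs(500, 30, 6) -> 2500ms of pure travel time.
// The same page from a nearby server (~25ms): estimateLoadMs(25, 30, 6) -> 125ms.
```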

This is where CDNs come in. A global Content Delivery Network is a network of computers located around the world. These computers are set up as a cache for the websites you are visiting. Website owners point their domain names at the servers of the CDN instead of the origin servers, and the CDN is configured to know how to get the requested content from an origin server where the content is hosted. So, the first time you visit the website, the CDN server which is geographically closest to you fetches your content from the origin host. It also keeps a copy of the content that the origin server has served, so if anyone else needs that content, it can return it directly instead of needing to route the request to the other end of the globe. This has the end effect of the website appearing to be served from the location of the CDN’s server that is closest to you. So each request to the web server now takes 50ms instead of 500ms+. The more ‘edge’ locations the CDN has, the better the chances of them having a server as close to you as possible.
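The caching behaviour described above can be sketched as a toy edge server, where `fetchFromOrigin` stands in for the real cross-globe request:

```javascript
// Toy CDN edge: serve from the local copy when we have one, otherwise
// fetch from the distant origin and remember the result for next time.
function makeEdgeCache(fetchFromOrigin) {
  const cache = new Map();
  return function request(path) {
    if (cache.has(path)) {
      return { body: cache.get(path), source: "edge" }; // fast: cache hit
    }
    const body = fetchFromOrigin(path); // slow: all the way to the origin
    cache.set(path, body);
    return { body: body, source: "origin" };
  };
}
```

The first request for a path pays the full trip to the origin; every subsequent request for it is answered locally.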

The other advantage of CDNs is that you now have a pool of servers serving your website traffic, so if one edge location drops into an error state, other servers can take up the slack, without the need for a huge amount of traffic back to the origin server adding load.

CDNs also get around a bit of a flaw in the way internet browsers load media assets from web servers. Most web browsers load content in a ‘blocking’ way, opening a maximum of around 10 simultaneous connections (typically only 4–6 without tweaking) to any one web server / domain. Once those connections are in use, you have to wait for one asset to finish downloading before the next one can start. By serving assets from a CDN on a separate domain, more assets can be downloaded simultaneously in a ‘non-blocking’ fashion, so page load speeds are vastly improved here too.

Due to all of these advantages, it makes a lot of sense for websites being served to a global audience to use a CDN to make their websites quicker for their end users wherever they are in the world. And there are a number of providers that offer this service to website owners. Some you may have heard of, like Cloudflare, Akamai, and Amazon’s Cloudfront. Fastly is another provider in this space that has a huge number of servers scattered around the globe, and boasts very impressive latency figures worldwide, which is how it has become popular with a number of larger websites around the globe.

Knowing what we know about CDNs now, it becomes easier to understand how half the world’s websites dropped out. The official line from Fastly is that a configuration error caused ALL of their CDN servers to refuse to serve any website content. It took an hour to resolve. If this had been one or two servers, then the CDN would have healed itself nicely and no-one would be the wiser – sites may have been a little slower for some locations, but generally it’d be fine. But if you push out a global configuration that wipes out the function of all your servers, there is no saving that until you push out a revised configuration that undoes the breaking change. The more clients you have, the more websites are affected. From this outage, it’s easy to see that Fastly have a large client base around the world, and no doubt some of those clients are now contemplating their options for reliable CDN providers.

If you need help getting your websites working at optimal speed in front of a global audience, using trusted CDN partners, get in touch with Webmad and we’ll help you plan and implement solutions for optimal performance.

Categories
Technology

What is a Progressive Web App?

For a long time, mobile apps have been the in thing. Businesses needed mobile apps to engage customers. To get your brand on their phones. But mobile apps have for a long time been expensive. And you need to develop an app for each of the various mobile environments – Apple’s iOS and Google’s Android.

The problem with a lot of these apps is that they typically don’t actually need to be traditional apps. The only reason to have a proprietary app developed for the various mobile environments is to enable interaction with hardware on the device – things like working with Bluetooth, audio, or customising use of the device’s camera. Most apps that have been developed don’t need this, and this is where progressive web apps (PWAs) can offer a cost effective solution.

Most of the functionality that these apps need can easily be covered with a web page. Doing this gives universal compatibility across mobile devices, desktop computers – basically anything with a web browser. This means developing for one environment and knowing it will work everywhere. This takes much less time, and as it’s using standard web technology, there is a much wider pool of developers available who can assist.

The biggest hurdle to using web technology on mobile devices has always been that it doesn’t work when there is no connectivity to the internet. Thankfully this is where progressive web apps come into their own. Progressive web apps add a layer of functionality that allows offline caching of data, both with the use of databases embedded into the web browsers themselves, and tools to detect whether we have connectivity to the source web servers, in order to use the local (on device) storage or not.

The other advantage of progressive web apps is that they are now accepted in both of the mobile environments’ application stores. Standard web pages don’t get that luxury. Standard apps have a long approval process for each and every update you release through the app stores, whereas PWAs can be updated on the fly whenever you need, so any security or bug fixes are on-device the next time the user’s device has internet connectivity. This is a major improvement, especially if you were to release into production with any issues – waiting a week or so to get an update approved can be fatal to your brand.

So – PWAs are cost effective, have wide compatibility across devices and platforms, and are easier to maintain long term. If you don’t need any hardware integration beyond what a standard web browser can do, then they make a lot of sense. If you are in need of an application for mobile devices, get in contact and we can talk through the various options and what will suit your needs best.

Categories
Security

What is 2 Factor Authentication (2FA) ?

It’s become increasingly popular for websites these days to request two factor authentication to be added to your login for extra security. This is a good thing… but why? And what is 2FA?

There are lots of different ways to authenticate yourself. These get lumped into 3 main groups, called factors:

  • Something you know ( ie a password or phrase you can remember )
  • Something you have ( ie a device that you have with you that can give a code to assist with authentication, or something like a credit card )
  • Something you are ( ie a fingerprint or facial recognition, or an iris scan like in the movies )

So – knowing there are 3 possible factors that can be used in authentication, 2 factor authentication is simply authentication that uses methods from 2 of the main authentication type groups. Generally the ‘something you are’ type of verification is tricky to implement – some cell phones and laptops have fingerprint verification, and some mobile phones boast facial recognition as well… but in practice this is fairly hit and miss… you burn or cut your finger and you are locked out, or you wake up in the morning looking a bit rough, and you are locked out.

Typically, 2 factor authentication in the real world is done using a password or PIN (something you know) and something you have (either a mobile phone with an app on it, or something like a credit card). Your EFTPOS card has had 2FA since waaaay back. The internet is just catching up. It’s coming from a place where all you had to know was a password – a password for your email, a password for your banking, a password for your computer login… and they all must be unique and 8+ characters long with a capital and a number and a symbol and your first pet’s name and… well, the list goes on. All things from the ‘stuff you know’ pile.

So to bring in the ‘something you have’ group, what most places do now is rely on your smartphone to provide that component – most people live with them on their hip, and they are easy to code for. Either send them a text message (SMS), or write a mobile phone app that can provide a code that only the web server and the mobile phone app can use to validate that it’s a particular user.

Why is this so much better than single factor authentication?
It is becoming increasingly easy to brute force a password. Heck – some poorly written websites have even been known to store passwords in plain text, so they are humanly readable if you get access to the storage that holds them. By adding 2FA, even if someone did manage to work out the password, they won’t have access to the device that completes the authentication, so whatever it is you are protecting with authentication is still safe, as long as it requires both the password and a second factor.

If you’ve got a website that you need to secure, we strongly recommend 2FA if possible. I know some people who can help make this happen.