Blog
Find out who is eating your traffic?

When you click… wait, click again… wait again, click again and again… argh… 🙁 Sound familiar? That is what it feels like when somebody has run out of Internet bandwidth.
There are a lot of things that can drain the capacity of the pipe connecting your web service to the Internet. It could be bad neighbours on your network, or even malicious applications or services running on the same network. The problem can get so bad that some customers toss out their computer and buy a new one.
However, many web hosting subscribers are not aware that this can be avoided, and that the problem may lie with the web host's network setup. Have you noticed a pattern to these network hiccups, with the slowdown always happening at a specific time of day, perhaps during the wee hours?

If this is the case, your web server might be relying on a single network cable to perform all its tasks, including internet, back-end management and backup. Yes, you heard me: it is all done on one cable, where your public internet traffic and backup traffic co-exist.

This poses three serious issues:

1. All your stored information is transmitted over the public network during backup. This increases the risk of exposing personal data.

2. Backup traffic can clog up your connection to the internet. Generally speaking, backup traffic volume is much higher than genuine internet traffic. When this happens, connections to your website become slow or the site becomes inaccessible.

3. Since backup traffic uses the same network cable as your internet connectivity, your metered traffic may include your backup traffic. That means your actual consumption might be far lower than what you were billed for.

Vastspace segregates traffic into internet, backup and management traffic on three network segments, with at least one dedicated network cable for each kind of traffic.
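As a client-side illustration of keeping backup traffic on its own segment, a backup job can bind its socket to the backup NIC's address so it never touches the public interface. This is only a minimal sketch; the 10.0.1.x addresses and the rsync port are hypothetical values for an isolated backup VLAN, not our actual layout.

```python
import socket

# Hypothetical addresses: 10.0.1.5 is this server's backup-NIC address,
# 10.0.1.10 is the backup target on the same isolated backup segment.
BACKUP_NIC_IP = "10.0.1.5"
BACKUP_TARGET = ("10.0.1.10", 873)   # e.g. an rsync daemon port

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((BACKUP_NIC_IP, 0))        # force the source onto the backup NIC
sock.connect(BACKUP_TARGET)          # backup traffic stays off the public segment
```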

Know the type of RAM used on your server

 

In general, many people do not enquire about the type of RAM used in their server. What most really care about is the number, as in "I have 32 GB of RAM" and so on.
Do you know that different types of RAM are used in desktop computers, entry-level servers and high-performance servers? It is important to make sure your server is fitted with ECC or better RAM for the stability, reliability and performance your 24/7 online activities need.

Most branded entry-level dedicated servers use ECC RAM. ECC is an extension of parity: it assigns multiple parity bits to larger chunks of data so that single-bit errors are not only detected but corrected automatically. Instead of a single parity bit for every 8 bits of data, ECC uses a 7-bit code that is automatically generated for every 64 bits of data stored in RAM. When the 64 bits of data are read back, a second 7-bit code is generated and compared with the original. If the codes match, the data is free of errors. If they don't, the system can determine where the error is and fix it by comparing the two codes.
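To make the idea concrete, here is a toy sketch of the same single-error-correcting scheme in Python, scaled down to a Hamming(7,4) code: 4 data bits protected by 3 check bits instead of the 64 data bits and 7-bit check code described above. It illustrates the principle only, not how a memory controller actually implements ECC.

```python
# Toy Hamming(7,4) example: encode 4 data bits with 3 check bits,
# flip one bit to simulate a memory error, then detect and correct it.

def encode(d):                      # d = list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                     # c = 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recompute the parity checks
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3 # non-zero syndrome = 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = [1, 0, 1, 1]
code = encode(word)
code[5] ^= 1                        # simulate a single bit flip in memory
assert correct(code) == word        # the error is detected and fixed
```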

Registered memory has registers (buffers) included on the module for a better flow of data, which increases data reliability. It also allows for greater memory scalability, so larger amounts of RAM can be installed. Because of this, registered memory is used mostly in servers, together with ECC functionality.

Fully buffered memory, also known as fully registered memory, takes over some functions from the memory controller. Communication between the memory controller and the module is serial, so fewer wires are needed to connect the chipset to the RAM. With serial communication, fully buffered RAM can support up to eight modules per channel and up to six memory channels, greatly increasing RAM performance as well as memory scalability. Fully buffered memory cannot be used in a server that takes registered memory, or vice versa. Fully buffered memory includes ECC functionality and is usually seen in high-performance workstations and servers.

It’s about time to visually inspect your server

 

When was the last time you had a visual check inside your server? Some might have heard about the capacitor plague, but if you have not, and you have owned your server for many years, you should do a visual inspection of the motherboard immediately. The capacitor flaw was reported as early as 2002, with a surge of such complaints in 2010: a higher-than-expected premature failure rate of aluminium electrolytic capacitors with non-solid (liquid) electrolyte from some Taiwanese manufacturers. The capacitors failed due to a poorly formulated electrolyte with a water-based corrosion effect.

Direct visual inspection is the most common method of identifying capacitors that have failed because of a bad electrolyte. Failed capacitors may show one or more of these visible symptoms:
a. Bulging or cracking of the vent on top of the capacitor.
b. The capacitor casing sitting crooked on the circuit board, because the bottom rubber plug has been pushed out.
c. Electrolyte leaked onto the motherboard from the base of the capacitor, or vented from the top, visible as crusty, rust-like brown deposits.
d. A detached or missing capacitor casing. Sometimes the vent does not open and a failed capacitor literally explodes, ejecting its contents violently and shooting the casing off the board.

When this happens, the capacitors are no longer able to serve their purpose of filtering the direct-current voltages on the motherboard; the result is an increase in the ripple voltage the capacitors are supposed to filter out, which causes system instability. Capacitors with high ESR and low capacitance can make power supplies malfunction and cause further circuit damage. On a server, the CPU core voltage or other system voltages may fluctuate, possibly with an increase in CPU temperature as the core voltage rises.

Even though there are not many cases nowadays, and the plague seems to have receded since 2013, I still urge you to have this inspection done quickly, especially for servers manufactured before 2010. I still see many of these plague-prone branded servers in operation when I walk around the data centres.

How often should you change your password?

 

Many organizations require mandatory password changes and consider this a security best practice. However, this might not be the case anymore, and there are many pros and cons to the practice. If you have been changing your password regularly, maybe it's time to look at when changing it often makes sense and when it does not; and if you have done little to secure your password, what you should do next.

Let’s get started with a strong password

Using a strong password is the most important thing you can do to help keep your account secure. Here are a few tips on how to create a strong password (a small generator sketch follows the list):

  • Use a combination of letters, numbers and symbols if permitted.
  • Make it at least eight characters long.
  • Never use names of spouses, children, girlfriends/boyfriends or pets.
  • Never use your phone numbers, ID numbers or birth dates.
  • Never use the same word as your log-in, or any variation of it.
  • Never use dictionary words.
  • Avoid using the same password for all your accounts.
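If you would rather not invent passwords by hand, something like the small Python sketch below, using the standard library's secrets module, generates strings that follow these rules; the 16-character default is just an example.

```python
import secrets
import string

# Illustrative helper: builds a random password from letters, digits and
# symbols using the cryptographically secure `secrets` module.
def make_password(length=16):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = ''.join(secrets.choice(alphabet) for _ in range(length))
        # keep only candidates that mix character classes
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

print(make_password())
```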

Enforce password duration policies, but wait…

Many companies force their users to update their passwords every few months, which limits the usefulness of a stolen password. If your password has been stolen and you weren't aware of it, the hacker could eavesdrop for an unlimited time, slowly and laboriously gathering all sorts of information about you and causing you damage. That is why, over the last few decades, many security policies have recommended frequent password updates.
But this may now be an outdated recommendation, and it's highly debatable whether updating passwords frequently actually increases security.

Has updating your passwords often become a waste of time?

A study from Microsoft found that mandatory password updates cost billions in lost productivity for little payoff in security, and other security resources point out that this "best practice" brings little security improvement but causes a lot of frustration. At the end of the day, users typically resort to sticky notes or any other easier, quicker way to get at their "secure" password, which can actually increase risk.

Experts point out that in many cases today, hackers or attackers won't be passive. If they get your account login, they probably won't hang around for months; they will most likely access your account right away. In some cases, the hacker might stick around eavesdropping, not by using your password but through an installed backdoor instead.

The next thing you might do to reduce your risk is to shorten the password update interval. But hold your horses: hackers have machines that can compute 348 billion NTLM password hashes (a password hashing algorithm used in Windows) per second, so any 8-character password can be broken in about 5.5 hours. If your account is being targeted, what makes you think shortening the update interval would reduce your risk? It's not worth this crazy routine that kills your brain cells on a daily basis.
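A quick back-of-the-envelope check of that figure, assuming an alphabet of the roughly 95 printable ASCII characters:

```python
# Rough sanity check of the claim above: exhaust every 8-character
# password drawn from ~95 printable ASCII characters at 348 billion
# NTLM guesses per second.
keyspace = 95 ** 8                   # ~6.6e15 possible passwords
rate = 348e9                         # guesses per second
hours = keyspace / rate / 3600
print(f"{hours:.1f} hours")          # roughly 5 hours, in line with the 5.5-hour figure
```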

A good reason to beef up your security with Two-Factor Authentication

Two-factor authentication is one of the best ways to ensure your account doesn't get hacked while spending less time and effort on password updates, which means less hassle and frustration. More important still, choose a unique and strong password for each of your accounts. Two-factor authentication is a simple feature that asks for more than just your password: it requires both something you know and something you have, such as a cell phone. After you enter your password, a second code is sent to your phone, or an application like Google Authenticator generates a 2-step verification code on your phone; only after you enter that code do you get into your account, keeping unwanted snoopers out.
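For the curious, the verification codes those apps generate follow the standard TOTP scheme (RFC 6238), which the short Python sketch below reproduces with only the standard library; the base32 secret shown is a made-up example, not a real one.

```python
import base64, hmac, hashlib, struct, time

# Minimal sketch of a time-based one-time password (TOTP), the scheme used
# by apps like Google Authenticator.
def totp(secret_b32, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)   # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Prints the same 6-digit code an authenticator app seeded with this
# example secret would show right now.
print(totp("JBSWY3DPEHPK3PXP"))
```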

At Vastspace, apart from the encryption layers on all our web channels for communicating with clients, our client portal supports Two-Factor Authentication. If you have an account with us, follow this guide to enable it now.

SSL v3 (POODLE) Vulnerability

Google researchers announced the discovery of a vulnerability that affects servers with SSL 3.0 enabled. This vulnerability has been named POODLE (Padding Oracle On Downgraded Legacy Encryption).
The POODLE vulnerability does not affect your SSL Certificates and you do NOT need to reissue/reinstall your SSL Certificates.
DigiCert and other security experts recommend disabling SSL 3.0 or CBC-mode ciphers with SSL 3.0 to protect against this vulnerability.

You can use the SSL Installation Diagnostics Tool from DigiCert to check whether SSL 3.0 is enabled on your servers.
For servers that have SSL 3.0 enabled, security experts recommend disabling SSL 3.0 for the time being and using TLS 1.1 or 1.2 instead. Most modern browsers support TLS 1.1 and 1.2.
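As a quick client-side sanity check, the short Python sketch below reports which protocol a given server actually negotiates with a modern client. It cannot probe SSLv3 itself (current OpenSSL builds refuse to offer it), so still use DigiCert's tool or your host to confirm SSL 3.0 is disabled server-side; the vastspace.net hostname is just an example.

```python
import socket, ssl

# Report the TLS protocol version negotiated with a server using the
# system's default (modern) client settings.
def negotiated_protocol(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print(negotiated_protocol("vastspace.net"))   # e.g. 'TLSv1.2' or 'TLSv1.3'
```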

If you use a hosting provider, we recommend that you call them and request that they disable SSL 3.0 on your server.
Servers that do not have SSLv3 enabled are unaffected.

Get closer to your clients with CloudFlare

I often get phone calls asking about CloudFlare. What is CloudFlare? The biggest part of CloudFlare, apart from the cool features that protect your website from a range of online threats, from spammers to SQL injection to DDoS, is getting closer to your clients globally in terms of online business connectivity.

Rather than using too many words to describe how your site can get closer to your clients, here's a comparison of vastspace.net with and without CloudFlare enabled.

With CloudFlare enabled, your website content is distributed to 28 data centers around the world. The CloudFlare CDN automatically caches your static files at its edge nodes so these files are stored closer to your visitors, while delivering your dynamic content directly from your web server. CloudFlare then uses a technology called Anycast to route your visitors to the nearest data center. The result is that your website, on average, loads twice as fast for your visitors regardless of where they are located.

You'll be able to see the exact speed benefits and savings in your personalized CloudFlare analytics report in the CloudFlare portal if you are a subscriber. On average, a website on CloudFlare loads twice as fast for its visitors, sees 65% fewer requests and saves 60% of bandwidth.

Most importantly, it costs as little as $15/month for a site on a CloudFlare Pro subscription at Vastspace. From a business perspective, it's bang for the buck.

 

Server Health monitoring, is your host doing it?

Server health monitoring is important and provides a useful additional perspective when combined with uptime monitoring; it helps Vastspace prevent downtime through informed capacity planning, rather than merely reacting to predictive-failure or failure events. Common scenarios resulting in server failure include excessive CPU usage, insufficient RAM and excessive disk I/O operations.

Our server monitoring team is dedicated solely to service reliability and to reacting to incidents proactively. The team utilizes an industry-proven set of system-level health and service monitoring tools to keep our servers at optimal performance through early detection of problems. When an issue is identified, the team responds promptly, reducing downtime and fixing issues proactively, most of the time before the client is even aware of the problem.
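For a rough idea of the kind of host-level signals such monitoring collects, here is a minimal sketch using the third-party psutil package; the 90% alert thresholds are made-up illustrative values, not our production settings.

```python
import psutil   # assumes the third-party psutil package is installed

# Poll the basic host-level metrics a monitoring agent typically watches:
# CPU load, memory pressure and disk I/O counters.
def health_snapshot():
    cpu = psutil.cpu_percent(interval=1)       # % CPU averaged over 1 second
    mem = psutil.virtual_memory().percent      # % RAM in use
    io = psutil.disk_io_counters()
    return {"cpu_pct": cpu, "mem_pct": mem,
            "disk_reads": io.read_count, "disk_writes": io.write_count}

snap = health_snapshot()
if snap["cpu_pct"] > 90 or snap["mem_pct"] > 90:   # hypothetical thresholds
    print("ALERT:", snap)
else:
    print("OK:", snap)
```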


Why Vastspace SSD VPS is better

Not all cloud SSD VPS are the same. Vastspace SSD VPS server nodes are custom built, with RAID optimized for redundancy and high performance. Vastspace SSD VPS server nodes use only enterprise SSD drives, ensuring fast and consistent command response times as well as protection against data loss and corruption.

We have run read and write tests for our Cloud SSD VPS against a popular SSD VPS.
Our Cloud SSD VPS starts with 2 CPU cores, 1 GB of memory and 30 GB of disk space. For the tests, we ordered the closest specification we could get from this provider: 2 CPU cores, 2 GB of memory and 40 GB of disk space.

Both candidates run CentOS 6.5 x64 and are hosted in Singapore data centers.
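For context, the sketch below shows the general shape of a simple sequential read/write timing test in Python; the 1 GiB file size and /tmp path are illustrative choices rather than the exact benchmark we ran, and the read pass can be flattered by the page cache unless it is dropped first.

```python
import os, time

# Sequential write-then-read timing over a temporary 1 GiB file.
SIZE = 1024 ** 3
PATH = "/tmp/io_test.bin"
chunk = os.urandom(1024 * 1024)       # 1 MiB of random data reused per write

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE // len(chunk)):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())              # force the data onto the SSD, not just the cache
print(f"write: {SIZE / (time.time() - start) / 1e6:.0f} MB/s")

start = time.time()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):        # read back in 1 MiB chunks
        pass
print(f"read:  {SIZE / (time.time() - start) / 1e6:.0f} MB/s")
os.remove(PATH)
```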

 

Here are the results: