How to protect your website?

 

Websites built with WordPress, Joomla, or Drupal are common. There is a huge collection of plug-ins, modules, and components for them, and most are free to download from the internet.
 
Because open-source applications are free, they are a very popular choice; WordPress is a good example. Roughly 6 out of 10 CMS-based websites use WordPress. Installation scripts are available in the most popular hosting panels, so a WordPress website can be ready for you in a few clicks.
 
But did you know these websites are hackable? Vulnerabilities exist in these open-source CMSs, and because the code is readable by anyone, the bad guys can find loopholes and exploit them.
 
So it is common to hear that someone's WordPress website has been hacked. Can we protect it? Do we need to install a costly appliance? In the past, engineers installed expensive equipment to combat web intrusion. Never assume that web protection comes bundled with your web hosting, because that is usually not the case. These days, cloud web protection is available at an affordable price.
 
There are two similar website protection services that can do the job: Cloudflare and Sucuri. Both are available at Vastspace, and both have the same goal of filtering known or even suspicious malicious activity. Starting from as little as USD 20, you get a CDN that speeds up connections to your website and protects it at the same time, and the protection is not limited to DDoS attacks.
 
Cloudflare has more POPs than Sucuri, so connections to your website are faster from more locations. But these are numbers on paper; in many cases you cannot tell the difference because it is measured in milliseconds. I have tried both, and while both offered protection, I like Sucuri more.
 
I have a trial of the Sucuri Website Firewall PRO with monitoring. After logging in to the control panel, you see the website's health status, which is information you will not get in Cloudflare. It gives you an overview of the website's health, including its Spamhaus status, which you can use as a reference for your mail server's RBL standing if the website and mail are hosted together. You can also set the scan interval as low as every 6 hours, or run a daily scan as a routine.
 
The website firewall gives you an overview of allowed and blocked traffic, along with useful options such as access control, security, performance, and SSL in the settings. With Cloudflare, I am overwhelmed by the features. Most laypeople would rather pay you to solve their problem, and after the initial setup they hardly log in to tweak the settings, so I feel some of these settings may be too much for them to digest. On Sucuri, the most essential settings are available, though you may want to take a closer look at the advanced security options and the protected pages under access control. These are good options if you have a WordPress or Joomla website and want to protect sensitive URLs.
 
But there is a drawback to both setups: if they are not configured correctly, attackers can bypass the firewall and your website is effectively unprotected. So make sure you talk to a certified engineer.
Services such as FTP, email, webmail, and the control panel can also break, so check them and ask whether there is a workaround.
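One simple way to spot this kind of misconfiguration is to see whether the origin server still answers requests sent directly to its IP address, bypassing Cloudflare or Sucuri. A rough Python sketch using the requests library; the IP and domain below are placeholders for your own origin and site:

import requests
origin_ip = "203.0.113.10"   # placeholder origin IP
domain = "example.com"       # placeholder domain
try:
    # Ask the origin directly, pretending to be a normal visitor.
    # verify=False because the certificate will not match a bare IP.
    resp = requests.get(f"https://{origin_ip}/", headers={"Host": domain},
                        timeout=5, verify=False)
    print(f"Origin answered directly (HTTP {resp.status_code});"
          " restrict it to the firewall's IP ranges.")
except requests.exceptions.RequestException:
    print("Direct access to the origin appears to be blocked.")

If the origin answers directly, an attacker who discovers its IP can skip the firewall entirely, which is why providers recommend restricting the origin to the protection service's published IP ranges.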
Feel free to write to email@vastspace.support if you have questions about the two services. As their partner, we are glad to assist you.

What is Btrfs?

Btrfs is a modern file system whose development began back in 2007. It was merged into the mainline Linux kernel in early 2009 and debuted in the Linux 2.6.29 release. Btrfs is GPL-licensed but still considered unstable, so Linux distributions tend to ship it as an option rather than as the default.

Btrfs isn't a successor to the default Ext4 file system used in most Linux distributions, but it is expected to replace Ext4 in the future. A maintainer of Ext3 and later Ext4 has stated that he sees Btrfs as a better way forward than continuing to rely on the ext technology.

Btrfs is expected to offer better reliability and scalability. It is a copy-on-write file system intended to address various weaknesses in current Linux file systems. Its primary focus areas are fault tolerance, repair, and easy administration.

What is DKIM?

DKIM, short for DomainKeys Identified Mail, is an email authentication method designed to prevent email spoofing. It allows the receiving mail server to check that an incoming email claiming to come from a specific domain was actually authorized by the owner of that sending domain. It is intended to prevent forged sender addresses in email, a technique often used in phishing and email spam.

In technical terms, DKIM lets a domain associate its name with an email message by affixing a digital signature to it. Verification is carried out using the signer's public key published in the DNS. A valid signature guarantees that some parts of the email (possibly including attachments) have not been modified since the signature was affixed. Usually, DKIM signatures are not visible to end users and are affixed or verified by the infrastructure rather than the message's authors and recipients. In that respect, DKIM differs from end-to-end digital signatures. Our virtuemail service has the option for DKIM. You can contact us to find out more.
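A quick way to see what a receiving server checks is to fetch the DKIM public key from DNS. A minimal sketch using the dnspython library, assuming a hypothetical selector named "default" and the placeholder domain example.com (the real selector appears in the s= tag of a message's DKIM-Signature header):

import dns.resolver
# DKIM public keys are published as TXT records at <selector>._domainkey.<domain>.
selector, domain = "default", "example.com"   # placeholders
answer = dns.resolver.resolve(f"{selector}._domainkey.{domain}", "TXT")
for rdata in answer:
    record = b"".join(rdata.strings).decode()
    print(record)   # typically starts with "v=DKIM1; k=rsa; p=..."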

Email delivery?

Someone once told me that to send an email you just type the recipient's email address. Sound familiar? He or she is not wrong: to send an email you first must know the recipient's address. But did anyone tell you how an email is actually delivered, and why the recipient sometimes does not receive it?

Let me explain how an email is delivered. Most people send email knowing only the recipient's address, but delivery involves more than that. The most crucial piece is actually DNS. Without DNS, the mail server is handicapped and does not know where or how to deliver your email. DNS consists of records; it is like a directory that tells the world where your mail server is hosted and how to reach it. With DNS, your email is routed to the destination mail server and, eventually, to the right mailbox.
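As a small illustration of the role DNS plays, the sending server's first step is an MX lookup on the recipient's domain to learn which mail servers accept mail for it. A minimal sketch with the dnspython library, using example.com as a placeholder domain:

import dns.resolver
domain = "example.com"   # placeholder recipient domain
answers = dns.resolver.resolve(domain, "MX")
# Lower preference values are tried first by sending servers.
for mx in sorted(answers, key=lambda r: r.preference):
    print(f"priority {mx.preference}: {mx.exchange.to_text()}")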

Sometimes a recipient will claim an email was never delivered. How does this happen? There are a few common reasons why an email is not delivered.

  1. A mistyped email address. This is a common mistake: the address is typed wrongly, so the email is sent to the wrong place or never delivered at all.
  2. The email has gone to the junk folder. It did arrive, but the recipient never sees it in the inbox.
  3. Some mail servers let users filter their email to fight spam. A keyword-based filter can sometimes wrongly filter out your email.
  4. Your mail server's IP is blacklisted by a popular DNSBL. It may not even be your fault: a compromised account sending spam can get the server's IP blacklisted. In that situation your email is treated as spam and bounced or filtered (a quick check is sketched after this list).
  5. The email shows as sent in the mail client but, due to a bug, was never actually delivered to the mail server. This can happen with an outdated application.
  6. It is rare, but an email can get stuck in the sender's or recipient's mail server queue for many reasons. While it is stuck in the queue, it will not be delivered to the user's mailbox.
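For reason 4, you can check a mail server's IP against a DNSBL yourself. A minimal sketch with dnspython, querying Spamhaus ZEN with a placeholder IP (note that Spamhaus may not answer queries relayed through large public resolvers):

import dns.resolver
def is_listed(ip: str, dnsbl: str = "zen.spamhaus.org") -> bool:
    # A DNSBL is queried by reversing the IP's octets and appending the list's zone.
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl
    try:
        dns.resolver.resolve(query, "A")
        return True    # any answer means the IP is listed
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
print(is_listed("203.0.113.10"))   # placeholder mail server IP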

Which type of VPS?

Many have asked which type of VPS to choose and what the differences are. The purpose of this post is to help customers make the right choice. It is not that Vastspace dropped OpenVZ and now praises KVM; Vastspace dropped OpenVZ based on demand, and because it is difficult to manage two types of virtualization at the same time, we chose KVM instead.

We are not saying OpenVZ is bad. Honestly, it has many advantages for hosting vendors like us. Hosting providers can put more instances on one node than with KVM. As you may know, OpenVZ shares files and the kernel with the host, so in theory disk space is allocated to you dynamically: even if you are given 40 GB of space, OpenVZ only counts the space you actually consume, not the full allocation.

Because OpenVZ shares the node's kernel, you cannot reboot into your own kernel on the virtualized instance. In other words, you cannot apply a kernel bug fix or security patch yourself; you have to wait until the virtualization distribution releases it and it is scheduled to be applied to the whole node, not to instances individually.

OpenVZ also allows the use of memory that does not belong to you or has not been allocated to you. That is to say, if you are allocated 1 GB of RAM you might be able to use more. Looked at from another angle, you are borrowing other people's RAM, and what happens when someone borrows from you? This should only happen with unused RAM, but on many occasions so much RAM is taken from the node that the entire server freezes from overage. All the VPSs hosted on that server are then affected due to poor management, and this does not happen with KVM. Virtualization has improved a great deal, however, and burstable memory can now be offered on a KVM VPS as well.

Some applications require a non-shared kernel. For example, a real-time anti-virus, which is essential in today's cyber world, can only be installed on a KVM VPS and not on an OpenVZ VPS because of the shared kernel.

In the real world, CPU is shared in OpenVZ: it is dynamically shared among the client machines. You could say it is burstable, unlike KVM, where you can only use what is allocated to you. Because of this, more instances can be hosted per node, so OpenVZ is cheaper per instance.

This explains the differences between OpenVZ and KVM virtualization. I hope the article helps you make a better choice when choosing a VPS.

OpenVZ vs KVM

OpenVZ is an OS-level virtualization technology. This means the host OS is partitioned into compartments/containers with resources assigned to each instance nested within.

In OpenVZ there are two types of resources, dedicated and burst. A dedicated resource is one where the VPS is guaranteed to get such if requested; these are “yours”. Burst resources come from the remaining unused capacity of the system. The system may allow one VPS to borrow resources like RAM from another VPS when the second one is not using them. Since it is borrowing, such resources have to be returned as soon as possible. Should the other VPS want their dedicated resources back, your processes might become unstable or terminated.

Since OpenVZ is OS-level virtualization, it consumes far fewer resources per VPS container than a full virtual environment. On two hosts with identical hardware and subscription rates, OpenVZ should perform better than KVM because it doesn't do full emulation. For example, it doesn't need to run multiple full OS kernels, as it can share the single kernel between multiple VPSes. The result is significant memory and CPU savings. In fact, most of the kernel memory usage is not charged to the VPS at all; instead, each VPS is only charged for what it needs in addition to the main kernel.

KVM is a hardware virtualization technology. This means the main OS on the server simulates hardware for another OS to run on top of it. It also acts as a hypervisor, managing and fairly distributing the shared resources like disk and network IO and CPU time.

KVM does not have burst resources; they are all dedicated or shared. This means resources like RAM and disk space are usually much harder to overcommit without endangering all user data. The downside with KVM is that if the limits are hit, the VPS must either swap, incurring a major performance penalty, or start killing its processes. Unlike OpenVZ, KVM VPSes cannot get a temporary reprieve by borrowing from their peers, as their dedicated resources are completely isolated.

Because KVM simulates hardware, you can run whatever kernel you like on it (within limits). This means KVM is not limited to the Linux kernel that is installed in the root node. KVM can also run other x86 operating systems like BSD and Microsoft Windows. Having a fully independent kernel means the VPS can make kernel modifications or load its own modules. This may be important because there are some more obscure features that OpenVZ does not support.
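If you are unsure which technology your current VPS uses, you can usually tell from inside the guest. A rough Python sketch; it relies on systemd-detect-virt where available and falls back to an OpenVZ-specific path check, so results may vary by distribution:

import os
import subprocess
def virtualization_type() -> str:
    # systemd-detect-virt prints "kvm", "openvz", "none", etc. on systemd systems.
    try:
        out = subprocess.run(["systemd-detect-virt"], capture_output=True,
                             text=True, check=False)
        if out.stdout.strip():
            return out.stdout.strip()
    except FileNotFoundError:
        pass
    # OpenVZ containers expose /proc/vz (but not /proc/bc, which only the host has).
    if os.path.exists("/proc/vz") and not os.path.exists("/proc/bc"):
        return "openvz"
    return "unknown"
print(virtualization_type())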

Why do some go for shared web hosting?

Some might ask: why go for shared web hosting when a VPS at Vastspace is so cheap and comes with a dedicated IP and services? With shared web hosting everything is shared, and you pay extra for a dedicated IP. Even so, there are many reasons why shared web hosting can be the better choice over a VPS:

  • You don't need a dedicated IP.
  • You have a smaller budget.
  • You cannot estimate the incoming traffic.
  • The server is managed and updated by the web hosting provider.
  • You do not need the knowledge to operate a VPS.
  • You are not sending email, so it matters less that a shared IP can get blacklisted easily.
  • You want shared resources so you can stretch when you need more on a less populated but powerful server. This may not always be the case, though, as many web hosting providers can limit your resources, especially on Linux.

What is DNSSEC? And why is it important?

To reach someone else on the Internet you have to type an address into your computer: a name or a number. That address has to be unique so computers know where to find each other. ICANN coordinates these unique identifiers across the entire world; without that coordination we would not have one global Internet. When you type a name, that name must first be translated into a number before the connection can be established. The system that does this is called the Domain Name System (DNS), and it translates names like www.vastspace.net into the actual numbers, called Internet Protocol (IP) addresses. ICANN coordinates this addressing system to ensure all the addresses are unique.

Recently, vulnerabilities were discovered in the DNS that allow an attacker to hijack this process of looking somebody, or a site, up on the Internet by name. The purpose of the attack is to take control of the session and, for instance, send the user to the hijacker's own deceptive website to collect account names and passwords.

These vulnerabilities have increased interest in introducing a technology called DNS Security Extensions (DNSSEC) to secure this part of the Internet’s infrastructure.

The questions and answers that follow are an attempt to explain what DNSSEC is and why its implementation is important.

1) First, what is the root zone?

The DNS translates domain names that humans can remember into the numbers used by computers to look up a destination (a little like a phone book is used to look up a phone number). It does this in stages. The first place it 'looks' is the top level of the directory service – or "root zone". So to use www.google.com as an example, your computer 'asks' the root zone directory (or top level) where to find information on ".com". After it gets a response it then asks the ".com" directory service identified by the root where to find information on google.com (the second level), and finally asks the google.com directory service identified by ".com" what the address for www.google.com is (the third level). After that process – which is almost instantaneous – the full address is provided to your computer. Different entities manage each one of these directory services: google.com by Google, ".com" by VeriSign Corporation (other top level domains are managed by other organizations), and the root zone by ICANN.
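You can watch this staged lookup yourself. The sketch below uses the dnspython library and your configured recursive resolver (so it illustrates the hierarchy rather than performing a true iterative resolution from the root):

import dns.resolver
# .com knows the name servers for google.com, and google.com knows the
# address of www.google.com.
for zone in ("com.", "google.com."):
    ns = dns.resolver.resolve(zone, "NS")
    print(zone, "name servers:", ", ".join(sorted(r.target.to_text() for r in ns)))
a = dns.resolver.resolve("www.google.com.", "A")
print("www.google.com resolves to", ", ".join(r.address for r in a))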

2) Why do we need to “sign the root”?

Recently discovered vulnerabilities in the DNS, combined with technological advances, have greatly reduced the time it takes an attacker to hijack any step of the DNS lookup process and thereby take over control of a session to, for example, direct users to their own deceptive web sites for account and password collection. The only long-term solution to this vulnerability is the end-to-end deployment of a security protocol called DNS Security Extensions – or DNSSEC.

3) What is DNSSEC?

DNSSEC is a technology that was developed to, among other things, protect against such attacks by digitally 'signing' data so you can be assured it is valid. However, in order to eliminate the vulnerability from the Internet, it must be deployed at each step in the lookup from root zone to final domain name (e.g., www.icann.org). Signing the root (deploying DNSSEC on the root zone) is a necessary step in this overall process. Importantly, it does not encrypt data. It just attests to the validity of the address of the site you visit.

4) What’s to stop all the other parts of the addressing chain from employing DNSSEC?

Nothing. But like any chain that relies on another part for its strength, if you leave the root zone unsigned you will have a crucial weakness. Some parts could be trusted and others might not be.

5) How will it improve security for the average user?

Full deployment of DNSSEC will ensure the end user is connecting to the actual web site or other service corresponding to a particular domain name. Although this will not solve all the security problems of the Internet, it does protect a critical piece of it – the directory lookup – complementing other technologies such as SSL (https:) that protect the "conversation", and providing a platform for yet-to-be-developed security improvements.

6) What actually happens when you sign the root?

“Signing the root” by using DNSSEC adds a few more records per top level domain to the root zone file. What are added are a key and a signature attesting to the validity of that key.

DNSSEC provides a validation path for records. It does not encrypt or change the management of data and is ‘backward compatible’ with the current DNS and applications. That means it doesn’t change the existing protocols upon which the Internet’s addressing system is based. It incorporates a chain of digital signatures into the DNS hierarchy with each level owning its own signature generating keys. This means that for a domain name like www.icann.org each organization along the way must sign the key of the one below it. For example, .org signs icann.org’s key, and the root signs .org’s key. During validation, DNSSEC follows this chain of trust up to the root automatically validating “child” keys with “parent” keys along the way. Since every key can be validated by the one above it, the only key needed to validate the whole domain name would be the top most parent or root key.
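To see part of this chain of trust, you can query the DS record that the root publishes for a TLD and ask a validating resolver to report the AD (Authenticated Data) flag. A minimal dnspython sketch; whether the AD flag comes back set depends on whether the resolver configured on your system actually validates DNSSEC:

import dns.flags
import dns.resolver
resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 1232)       # DO bit: please include DNSSEC records
resolver.flags = dns.flags.RD | dns.flags.AD   # AD bit in the query asks for validation status
# The DS record the root publishes for "org" links the root's key to the .org keys.
for rr in resolver.resolve("org.", "DS"):
    print("DS for org.:", rr.to_text())
answer = resolver.resolve("www.icann.org.", "A")
print("Validated by resolver:", bool(answer.response.flags & dns.flags.AD))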

This hierarchy does mean however that, even with the root signed, full deployment of DNSSEC across all domain names will be a process that will take time, since every domain below must also be signed by their respective operators to complete a particular chain of trust. Signing the root is just a start. But it is crucial. Recently TLD operators have accelerated their efforts to deploy DNSSEC on their zones (.se, .bg, .br, .cz, and .pr do now, with .gov, .uk, .ca and others coming) and others expect to as well.

7) How is the root zone file managed?

Management of the root is shared between four entities:

i) ICANN, an international not-for-profit corporation under contract to the United States Department of Commerce, performs the "IANA" function. IANA stands for Internet Assigned Numbers Authority. ICANN receives and vets information from the top level domain (TLD) operators (e.g. "com");

ii) National Telecommunications and Information Administration (NTIA) – which is an office within the United States Department of Commerce – authorizes changes to the root;

iii) VeriSign, a United States-based for-profit company, is contracted by the US Government to edit the root zone with the changed information supplied and authenticated by ICANN and authorized by the Department of Commerce, and distributes the root zone file containing information on where to find info on TLDs (e.g. "com"); and

iv) An international group of root server operators that voluntarily run and own more than 200 servers around the world that distribute root information from the root zone file across the Internet. Designated by letter, the operators of the root servers are:

A) VeriSign Global Registry Services;
B) Information Sciences Institute at USC;
C) Cogent Communications;
D) University of Maryland;
E) NASA Ames Research Center;
F) Internet Systems Consortium Inc.;
G) U.S. DOD Network Information Center;
H) U.S. Army Research Lab;
I) Autonomica/NORDUnet, Sweden;
J) VeriSign Global Registry Services;
K) RIPE NCC, Netherlands;
L) ICANN;
M) WIDE Project, Japan.

Ref: http://www.root-servers.org

8) Why is it important for DNSSEC security that the vetting, the editing and the signing be done by one organization?

For DNSSEC the strength of each link in the chain of trust is based on the trust the user has in the organization vetting the key and other DNS information for that link. In order to guarantee the integrity of this information and preserve this trust, once the data has been authenticated it must be immediately protected from errors, whether malicious or accidental, which can be introduced any time key data is exchanged across organizational boundaries. Having a single organization and system directly incorporate the authenticated material into the signed zone maintains that trust through to publication. It is simply more secure.

With the increased confidence in the security of DNS that DNSSEC will bring, it becomes ever more important that the trust achieved from ICANN‘s validation and authentication of TLD trust anchor material be maintained through to a signed root zone file.

9) In DNSSEC, what are the KSK and ZSK?

KSK stands for Key Signing Key (a long-term key) and ZSK stands for Zone Signing Key (a short-term key). Given sufficient time and data, cryptographic keys can eventually be compromised. In the case of the asymmetric or public key cryptography used in DNSSEC, this means an attacker determines, through brute force or other methods, the private half of the public-private key pair used to create the signatures attesting to the validity of DNS records. This allows him to defeat the protections afforded by DNSSEC. DNSSEC thwarts these compromise attempts by using a short-term key – the zone signing key (ZSK) – to routinely compute signatures for the DNS records, and a long-term key – the key signing key (KSK) – to compute a signature on the ZSK to allow it to be validated. The ZSK is changed or rolled over frequently to make it difficult for the attacker to "guess", while the longer-term KSK is changed over a much longer time period (current best practices place this on the order of a year). Since the KSK signs the ZSK and the ZSK signs the DNS records, only the KSK is required to validate a DNS record in the zone. It is a digest of the KSK, in the form of a Delegation Signer (DS) record, that is passed up to the "parent" zone. The parent zone (e.g. the root) signs the DS record of the child (e.g., .org) with their own ZSK that is signed by their own KSK.

This means that if DNSSEC is fully adopted the KSK for the root zone would be part of the validation chain for every DNSSEC validated domain name (or yet to be developed application).
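You can inspect a zone's published keys and tell the KSK from the ZSK by the DNSKEY flags field: 257 marks a KSK (the SEP bit is set), 256 a ZSK. A minimal dnspython sketch using .org as an example zone:

import dns.resolver
answer = dns.resolver.resolve("org.", "DNSKEY")
for key in answer:
    role = {257: "KSK", 256: "ZSK"}.get(key.flags, "other")
    print(f"{role}: flags={key.flags} algorithm={key.algorithm}")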

10) Who manages the keys?

Under this proposal, ICANN would keep the key infrastructure but the credentials to actually generate the KSK would be held by outside parties. This is an important element of the overall global acceptance of this process. ICANN did not propose a specific solution as to which entities should hold the credentials, and believes that this, like all of these issues, is a matter for public consultation and for decision by the United States Department of Commerce.


 

Virtual Private Server, do you need one?

A Virtual Private Server (VPS) is a virtual machine sold as a service by a web hosting provider.

A Virtual Private Server runs its own copy of an operating system, and customers have superuser-level access to that operating-system instance, so they can install almost any software that runs on that OS. For many purposes a VPS is functionally equivalent to a dedicated physical server and, being software-defined, can be created and configured much more easily. It is also priced lower than an equivalent physical server; however, because it shares the underlying physical hardware with other VPSs, performance may be lower and may depend on the workload of other instances on the same hardware node.

 

The force driving server virtualization is similar to what led to the development of time-sharing and multiprogramming in the past. Although the resources are still shared, as under the time-sharing model, virtualization provides a higher level of security, depending on the type of virtualization used, since the individual virtual servers are mostly isolated from each other and can run their own full-fledged operating system, which can be independently rebooted as a virtual instance.

Partitioning a single server so that it appears as multiple servers has become increasingly common on microcomputers since the launch of VMware ESX Server in 2001. The physical server typically runs a hypervisor, which is tasked with creating, releasing, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of the resources of the physical server, typically in such a way that the guest is not aware of any physical resources other than those allocated to it by the hypervisor. As a Virtual Private Server runs its own copy of its operating system, customers have superuser-level access to that operating system instance and can install almost any software that runs on the OS; however, because of the number of virtualization clients typically running on a single machine, a VPS generally has limited processor time, RAM, and disk space.

Although VMware and Hyper-V dominate in-house corporate virtualization, their cost and licensing limitations make them less common among VPS providers, which instead typically use products such as OpenVZ, Virtuozzo, Xen or KVM.

Many companies offer VPS hosting or virtual dedicated server hosting as an extension of their web hosting services. There are also several challenges to consider when licensing proprietary software in multi-tenant virtual environments.

With unmanaged or self-managed hosting, the customer is left to administer their own server instance.

Unmetered hosting is usually offered with no limit on the amount of data transferred on a fixed-bandwidth line. Typically, unmetered hosting is provided at 10 Mbit/s, 100 Mbit/s or 1000 Mbit/s (with a few offers up to 10 Gbit/s). This means the customer could theoretically transfer about 3 TB per month on a 10 Mbit/s line, or as much as about 300 TB per month on a 1000 Mbit/s line, although actual usage will be considerably less. On a VPS this is shared bandwidth, and a fair usage policy usually applies. Unlimited hosting is also commonly marketed but is generally bound by acceptable usage policies and terms of service. Offers of unlimited disk space and bandwidth are always false, due to cost, carrier capacity and technological limitations.
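For reference, the 3 TB and 300 TB figures above are simply the line rate multiplied out over a 30-day month. A small Python sketch of that arithmetic:

def max_monthly_transfer_tb(mbit_per_s: float, days: int = 30) -> float:
    # line rate in bits/s times seconds in the month, converted to decimal terabytes
    bits = mbit_per_s * 1_000_000 * days * 24 * 3600
    return bits / 8 / 1_000_000_000_000
for rate in (10, 100, 1000):
    print(f"{rate} Mbit/s fully saturated ~ {max_monthly_transfer_tb(rate):.0f} TB/month")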