Is your mail server IP address blacklisted?


A common tool used to check whether a mail server IP address is blacklisted is MXToolbox. We often hear that someone's mail server is blacklisted. But by whom? And how? What happens if my mail server is blacklisted, and how can I resolve it? This article will give you a better understanding of the matter.

Firstly, we must understand DNSBL. What is a DNSBL? DNSBL stands for Domain Name System Blacklist. Also called DNSBLs or DNS Blacklists, they are spam-blocking lists that allow a mail server administrator to block messages from particular mail servers that have a history of sending spam. The lists are built on the Internet's Domain Name System, which converts cryptic numerical IP addresses such as 123.123.123.123 into domain names like example.net, making the lists much simpler to read, use, and search. If the maintainer of a DNS Blacklist has previously received spam of any kind from a specific domain, that server will be "blacklisted", and all messages sent from it may be either flagged or rejected by all sites that use that particular list.

DNS Blacklists have a fairly long history in Internet terms, with the first one being created in 1997. Known as the RBL, its purpose was to block spam email and to educate Internet service providers and other websites about spam and its related problems. Although modern DNS Blacklists are rarely used as educational tools, their function as an email blocker and filter still serves as their main purpose today. In fact, the vast majority of today's email servers support at least one DNSBL in order to reduce the quantity of junk mail their customers receive. The three fundamental components that make up a DNS Blacklist (a domain name to host it under, a server to host that domain, and a list of addresses to publish) also haven't changed since the RBL was first created. Since then, a large number of different DNSBLs have sprung up and are available for use, and each keeps its own list, populated based on what does or doesn't meet its own requirements and criteria for what a spammer is.
Because of this, DNS Blacklists can differ greatly from one to another. Some are stricter than others; some list sites only for a set period of time from the day the last piece of spam was received by the maintainer, while others are manually managed; and still others block not only IP addresses but entire ISPs known to harbor spammers. As a result, some lists work better than others, because they are maintained by operators with a greater degree of trustworthiness and credibility than competing lists may have. Users can also use these variations to select the DNS Blacklist that works best for their particular security needs. More lenient lists may let more spam through, but are less likely to block legitimate messages that stricter lists would misidentify. To help facilitate this, DNS Blacklists intended for public use will generally have a specific, published policy detailing exactly what a listing means, and must abide by the criteria laid out in it in order not only to attain public confidence in their services but to maintain it as well.

Now we have understood what a DNSBL is. The commonly used lists come from SpamCop, Spamhaus, Barracuda, and others. They maintain near-real-time updated lists that most mail server administrators use to block spam email. This is a common and popular method. As soon as your mail server IP is listed, emails originating from the blacklisted mail server are bounced until it is delisted.
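To illustrate the mechanics, here is a minimal Python sketch of how a DNSBL query is formed and checked. The zone zen.spamhaus.org is just one example list; the lookup works by reversing the IP's octets and querying the result as a hostname, where an answer means "listed" and NXDOMAIN means "not listed".

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the hostname queried against a DNSBL: the IP's octets
    are reversed and appended to the blacklist zone."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """An A-record answer (typically in 127.0.0.0/8) means the IP is
    listed; a resolution failure (NXDOMAIN) means it is not."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

print(dnsbl_query_name("203.0.113.7", "zen.spamhaus.org"))
# -> 7.113.0.203.zen.spamhaus.org
```

Tools like MXToolbox simply run this kind of query against dozens of blacklist zones at once.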

What is a WAF?


Have you ever wondered what WAF means? (extracted from Sucuri Website)

WAF stands for Website Application Firewall. In order to make it simple to understand, imagine your website as a house and the people outside on the streets are the traffic that wants to come to your website.  Of course, you want to open your door to friends and family, but you also want to protect your house from the bad guys. That is exactly what the firewall does. The WAF is the locked house door. A WAF keeps the malicious traffic off your website. In other words, a WAF is a layer of protection that sits between your website and the traffic it receives.

Why do you need a WAF?

The same way that there are criminals on the streets, there are hackers online. Threats to websites emerge and evolve every day; keeping up with the hacking trends can be very stressful to any webmaster.

Network and local firewalls alone cannot stop hackers from breaking into your website anymore. Many of these solutions are not effective when it comes to stopping malicious online traffic.

Having an effective Web Application Firewall (WAF) provides companies and website owners peace of mind.

Expecting your host to take care of your website security can be misleading, as their main goal is to ensure the accessibility of your website. Some hosts, like GoDaddy, do offer website security. Nevertheless, you need to make sure to implement a security solution, like the Sucuri Platform, to protect your website.

Another important aspect of having a Website Application Firewall on your website is the time it will save you in the long run. After setting up a WAF properly on your website, you would no longer be spending precious time thinking about ways to protect it. Then, if your website was, in fact, hacked, how many hours would you waste trying to find the issue and fix it? I am not even mentioning the amount of money potentially lost from having an unprotected website.

How does a WAF work?

The WAF works as a vaccine for a website. It is a preventive measure taken so your website does not get infected or goes offline. Nobody really likes to be vaccinated, but the cost of getting sick is always a thousand times higher. Having a WAF activated means having a proactive posture on your website security.

You already know that having a website firewall solution is vital to protecting any website. Next, let’s dive deeper into the characteristics of WAFs.

WAF Features

Application firewalls go beyond the metadata of the packets transferred at the network level. They focus on the data in transfer. Application firewalls were created to understand the type of data allowed for each protocol, like SMTP and HTTP. There are specific application firewalls for websites and they are called Website Application Firewalls (WAF).

Application Firewalls

In general, all WAF solutions function the same way. They are basically a wall between your website application and the visitor browsing your website. A WAF's main goal is to impede malicious requests from damaging your website.
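As a rough illustration only (not how any particular vendor's WAF is implemented), that wall can be sketched as a set of deny patterns applied to each incoming request before it reaches the application:

```python
import re

# A toy deny-list, loosely modelled on common SQL-injection,
# path-traversal, and XSS probes. Real WAF rulesets are far larger.
BLOCKED_PATTERNS = [
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r"\.\./"),
    re.compile(r"<script", re.IGNORECASE),
]

def inspect(path_and_query: str) -> str:
    """Return 'block' if the request matches a malicious pattern,
    otherwise 'allow'."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(path_and_query):
            return "block"
    return "allow"

print(inspect("/index.php?id=1"))                        # allow
print(inspect("/index.php?id=1 UNION SELECT password"))  # block
```

A cloud WAF applies rules like these at its own network edge, so the blocked request never reaches your server at all.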

The difference among the many website firewall solutions on the market lies mainly in how they are deployed and in their databases. The Sucuri WAF is the most advanced in terms of virtual patching. We take research very seriously. Our firewall analysts work hard day and night so we can provide you with the most complete and robust solution on the market. Our WAF filters block up to 100% of the attacks your website can suffer.

Now that you know what a WAF is, let’s talk about the Sucuri WAF.

Sucuri Firewall

Sucuri is a website security company that was born to offer website owners a comprehensive security solution. The Sucuri Firewall is a cloud-based software as a service (SaaS) Website Application Firewall (WAF) and Intrusion Prevention System (IPS) developed exclusively for websites.

What is great about the Sucuri Firewall is that it functions as a reverse proxy. The Sucuri WAF intercepts and inspects all incoming Hypertext Transfer Protocol/Secure (HTTP/HTTPS) requests to a website. The WAF then strips out the malicious requests at the Sucuri network edge, before they arrive at your server.

 

Another feature that the Sucuri Firewall offers is that its WAF includes Virtual Patching and Virtual Hardening engines. The Sucuri Firewall mitigates threats as they happen.

The Sucuri WAF keeps the threats far from your website without impacting your website negatively. Quite the opposite, the Sucuri website firewall makes a website up to 70% faster, as it is built on a Content Distribution Network (CDN).

How the Sucuri Firewall Works

Performance optimization is part of the Sucuri WAF feature set. The CDN caches dynamic and static content across all nodes in the network to ensure optimal performance around the world. The Sucuri WAF configuration provides for global reach, load balancing, failover, and comprehensive performance improvement.

Website up to 70% faster with Sucuri Firewall

The Sucuri WAF runs on a proprietary Globally Distributed Anycast Network (GDAN). Anycast allows a network to broadcast an IP to multiple locations from a single node, permitting the nearest node to respond to a request. Imagine your website has a global audience: the website is hosted on a server in Houston, but your main visitors are in Asia and Western Europe. If you have the Sucuri Firewall activated on your website, the content would be broadcasted from a Tokyo and London Point of Presence (PoP) via our Anycast network. The result would be an improved user experience as visitors in Asia would get a response from the Tokyo PoP, and the ones in Europe from the London PoP. To sum it up, since Sucuri WAF runs on a Global AnyCast Network, the nearest node responds to the requests, bringing improved availability, resiliency, and failover capability to any website.

Anycast Network

This unique configuration allows for high availability and redundancy if anything fails in the network. Moreover, the Sucuri Firewall offers full Domain Name Server (DNS) services.

Another great advantage of using the Sucuri WAF solution is that it can help you increase your SEO rankings. The inclusion of an SSL certificate and improved speed from the Anycast CDN can improve SEO. You might see SEO improvement after the Sucuri WAF is activated because having HTTPS enabled and using a CDN are confirmed ranking signals from Google.

To sum it up, the Sucuri WAF:

  • Mitigates Distributed Denial of Service (DDoS) Attacks
  • Prevents Vulnerability Exploit Attempts, such as SQL injections, cross-site scripting (XSS), remote file inclusion (RFI) and local file inclusion (LFI)
  • Protects Against the OWASP Top 10 (and more)
  • Protects Against Zero-Day Exploits
  • Protects Against Access Control Attacks, such as Brute Force attempts
  • Offers Performance Optimization with its CDN

How can I add the Sucuri WAF to my Website?

In order to add the Sucuri Firewall to your website, all you need to do is add a DNS A record or switch to Sucuri nameservers. The time to go live is dictated by the DNS Time to Live (TTL). In most cases, it takes from 30 to 60 minutes. If you have any issues with the setup, or if you are not technical and need assistance, our support team can guide you through it.

Conclusion

As you have seen, using the Sucuri Website Application Firewall can be very valuable for your website and business. Not only do we offer protection, but also a performance boost and better SEO, which are like gold for any website owner. If you are wondering why you have not added our Firewall to your website yet, don't worry. Chat with us and we will help you have your website protected today.

How to protect your website?


 
Websites built with WordPress, Joomla, or Drupal are common. There is a huge collection of plug-ins, modules, and components, most of them free to download from the internet.
 
Because open source applications are free, they are a very popular choice; take a WordPress website, for example. Six out of ten websites are using WordPress. The installation script is available on the most popular hosting panels, and a WordPress website is ready for you in a few clicks.
 
But did you know these websites are hackable? Vulnerabilities exist in these open source CMSes, because the code of the CMS is readable by anyone. The bad guys will find its loopholes and exploit them.
 
So it is common to hear that someone's WordPress website has been hacked. Can we protect it? Do we need to install a costly appliance? In the past, engineers installed expensive equipment to combat web intrusion. Never assume that web protection comes bundled with your web hosting; that is usually not the case. In this modern world, cloud web protection is available at an affordable price.
 
There are two similar website protection services that can do the job: Cloudflare and Sucuri. Both are available at Vastspace. They have the same goal: to filter any known, or even suspicious, malicious activity. Starting at as little as USD 20, you get a CDN to speed up connections to your website and protect it at the same time, and not only against DDoS attacks.
 
Cloudflare has more PoPs than Sucuri, so the connection to your website from many places is faster. But these are numbers on paper; in many cases you cannot tell the difference, because it is measured in milliseconds. I have tried both; they both offered protection, but I like Sucuri more.
 
I have a trial of the Sucuri Website Firewall PRO and monitoring service. From the control panel, you see the website's health status after you have logged in. This is information you will not have in Cloudflare: it provides you with an overview of the website's health. The Spamhaus status is shown as well, which you can use as a reference for your mail server's RBL status if mail and web are hosted together. You can also adjust the scan interval, as low as every 6 hours, or scan daily as a routine.
 
At the website firewall, you get an overview of allowed and blocked traffic, plus more useful options like access control, security, performance, and SSL under the settings. With Cloudflare, I'm overwhelmed by the features. Most laymen will want to pay you to solve their problem; after the initial setup, they hardly log in to tweak the settings, so I feel some of these settings might be too much for them to digest. On Sucuri, the most essential settings are available. You might want to take a closer look at the advanced security options and the protected pages under access control; these are good options if you have a WordPress or Joomla website and want to protect sensitive URLs.
 
But there is a catch with both setups: if they are not set up correctly, attackers can bypass the firewall, and your website ends up unprotected. So make sure you talk to a certified engineer.
Also, services like your FTP, email, webmail, and control panel can break too. Make sure you check these services and ask if there is any workaround.
Feel free to write to [email protected] if you have questions about the two services. As their partner, we are glad to assist you.

What is Btrfs?

Btrfs is a modern file system that began development in 2007. It was merged into the mainline Linux kernel at the beginning of 2009 and debuted in the Linux 2.6.29 release. Btrfs is GPL-licensed but currently considered unstable; hence, Linux distributions tend to ship with Btrfs as an option, but not as the default.

Btrfs isn't a direct successor to the default Ext4 file system used in most Linux distributions, but it is expected to replace Ext4 in the future. A maintainer of Ext3 and later Ext4 has stated that he sees Btrfs as a better way forward than continuing to rely on ext technology.

Btrfs is expected to offer better reliability and scalability. It is a copy-on-write file system meant to address various weaknesses in current Linux file systems. Its primary focus areas include fault tolerance, repair, and easy administration.
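The copy-on-write idea can be sketched in a few lines of Python. This is only a toy model of the concept, not of Btrfs internals: updates allocate new blocks instead of overwriting old ones, so a snapshot keeps seeing the old data at no extra cost.

```python
def snapshot(blocks):
    """A snapshot is just a new reference list pointing at the same
    blocks; nothing is copied until something changes."""
    return list(blocks)

def write(blocks, index, data):
    """Copy-on-write update: instead of overwriting the block in
    place, allocate a new block and repoint this reference list."""
    new_blocks = list(blocks)
    new_blocks[index] = data
    return new_blocks

original = ["A", "B", "C"]
snap = snapshot(original)
modified = write(original, 1, "B2")

print(modified)  # ['A', 'B2', 'C'] -- new block for the changed data
print(snap)      # ['A', 'B', 'C']  -- snapshot still sees the old block
```

Because old blocks are never overwritten, features like cheap snapshots and self-healing after a partial write fall out of this design naturally.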

What is DKIM?

DKIM, short for DomainKeys Identified Mail, is an email authentication method designed to prevent email spoofing. It allows the receiving email server to check that an incoming email claiming to come from a specific domain was in fact authorized by the owner of that sending domain. It is intended to prevent forged sender addresses in emails, a technique often used in phishing and email spam.

In technical terms, DKIM lets a domain associate its name with an email message by affixing a digital signature to it. Verification is carried out using the signer's public key published in the DNS. A valid signature guarantees that some parts of the email (possibly including attachments) have not been modified since the signature was affixed. Usually, DKIM signatures are not visible to end users, and are affixed or verified by the infrastructure rather than by the message's authors and recipients. In that respect, DKIM differs from end-to-end digital signatures. Our Virtuemail has the option for DKIM; you can contact us to understand more.
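As a small illustration, the location of that public key in DNS follows a fixed naming scheme. The selector "default" and the domain example.net below are only placeholders; real selectors are chosen by whoever signs the mail.

```python
def dkim_record_name(selector: str, domain: str) -> str:
    """The DKIM public key is published as a TXT record at
    <selector>._domainkey.<domain>."""
    return f"{selector}._domainkey.{domain}"

print(dkim_record_name("default", "example.net"))
# -> default._domainkey.example.net
```

A receiving server reads the selector from the email's DKIM-Signature header, fetches the TXT record at this name, and uses the key found there to verify the signature.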

Email delivery?


Someone is telling me: just type the recipient's email address to send an email. Does that sound familiar? He or she is not wrong. To send an email you first must know the recipient's email address, but did anyone tell you how an email is actually delivered, and why he or she might not receive your email?

Let me explain how an email is delivered. Most people send email knowing only the recipient's email address. But sending email is about more than just knowing the other party's address. The most crucial thing for sending email is actually DNS. Without DNS, the mail server is handicapped and does not know where and how to deliver your email. DNS consists of records; it is like a directory that tells the world where your mail server is hosted and where to find you. With DNS, your email is delivered to the destination mail server and, eventually, the mailbox.
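One step of that directory lookup can be sketched in Python: the sending server asks DNS for the recipient domain's MX records and tries them in order of preference, lowest number first. The hostnames below are hypothetical.

```python
def pick_mx(mx_records):
    """A sending server sorts the (preference, hostname) pairs it gets
    from DNS and tries the lowest preference value first."""
    return sorted(mx_records, key=lambda rec: rec[0])

# Hypothetical MX answer for a lookup of example.net.
answers = [(20, "backup-mx.example.net"), (10, "mail.example.net")]
order = pick_mx(answers)
print(order[0][1])  # mail.example.net is tried first
```

Only if the first host is unreachable does the sender fall back to the next one, which is why backup MX hosts get higher preference numbers.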

Sometimes your recipient will claim an email was not delivered to him or her. How does this happen? There are a few common reasons why an email is not delivered.

  1. A mistyped email address. This is a common mistake: the address is mistyped, so the email is sent to the wrong address or not sent at all.
  2. The email has gone to the junk box; the recipient does not realize the email arrived because it was never in the inbox.
  3. Some mail servers allow users to filter their emails because of spam. Sometimes keyword-based filtering can wrongly filter out your email.
  4. Your mail server IP is blacklisted by a popular DNSBL. It was not you; someone else's account was compromised and used to send spam, which can get your mail server IP blacklisted. In such a situation, your email will be treated as spam and bounced or filtered.
  5. The email shows as sent in the mail client but, due to a bug, was never delivered to the mail server. This can happen with an outdated application.
  6. It is rare, but it happens: the email is caught in the sender's or recipient's mail server queue for any of many reasons. While the email is stuck in the queue, it will not be delivered to the user's mailbox.

Which type of VPS?


Many have asked which type of VPS to choose and what the differences are. The purpose of this post is to help customers make the right choice. It is not that Vastspace dropped OpenVZ and started praising KVM: Vastspace dropped OpenVZ based on demand, and because it is difficult to manage two types of virtualization at the same time. We finally chose KVM instead.

We are not saying OpenVZ is bad. Honestly, it has many advantages for hosting vendors like us. Hosting providers can put more instances on one node compared to KVM. As you may know, OpenVZ shares files and the kernel. Theoretically, you use less space, or rather space is available to you dynamically: even if you are given 40 GB of space, OpenVZ counts only the space you are consuming, not what was allocated.

Because OpenVZ shares the node's kernel, you cannot reboot the virtualized instance on your own. In other words, you cannot apply a kernel bug fix or security fix yourself. You have to wait until the virtualization distribution releases it and the node is scheduled to be updated as a whole, not individually.

OpenVZ allows the use of memory that does not belong to you, or that has not been allocated to you. That is to say, if you are allocated 1 GB of RAM, you might be able to use more. Looked at from another angle, you are borrowing others' RAM; and what happens when someone borrows from you? This should only happen to unused RAM, but on many occasions RAM is overdrawn from the node and the entire server freezes because of the overage. All the VPSes hosted on that server are affected due to poor management, and this will not happen with KVM. Virtualization has improved a great deal, however; there is such a thing as burstable memory, and this can be done on a KVM VPS.

Some applications require a non-shared kernel. For example, a real-time anti-virus, which is essential in today's cyber world, can only be installed on a KVM VPS and not an OpenVZ VPS, due to OpenVZ's shared kernel.

In the real world, CPUs are shared in OpenVZ: CPU time is dynamically shared among the client machines. You could say it is burstable, unlike KVM, where you can only use what is allocated to you. Because of this, more instances can be hosted per node. In other words, it is cheaper per instance on OpenVZ.

This explains the differences between OpenVZ and KVM virtualization. I hope the article helps you make a better choice when choosing a VPS.

OpenVZ vs KVM

OpenVZ is an OS-level virtualization technology. This means the host OS is partitioned into compartments/containers with resources assigned to each instance nested within.

In OpenVZ there are two types of resources, dedicated and burst. A dedicated resource is one where the VPS is guaranteed to get it if requested; these are "yours". Burst resources come from the remaining unused capacity of the system. The system may allow one VPS to borrow resources like RAM from another VPS while the second one is not using them. Since it is borrowing, such resources have to be returned as soon as possible: should the other VPS want its dedicated resources back, your processes might become unstable or be terminated.
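The dedicated-plus-burst idea can be sketched as a simple allocation rule. This is a toy model of the concept, not OpenVZ's actual accounting:

```python
def grant(request, dedicated, free_pool):
    """Grant up to the dedicated share, then borrow from the node's
    free pool for any burst demand above it."""
    guaranteed = min(request, dedicated)
    burst = min(request - guaranteed, free_pool)
    return guaranteed + burst

# A VPS with 1 GB dedicated RAM asks for 1.5 GB while 2 GB sits
# unused on the node: the extra 512 MB is borrowed.
print(grant(1536, 1024, 2048))  # 1536
# If the node has no spare RAM, the same request only gets its
# dedicated share.
print(grant(1536, 1024, 0))     # 1024
```

The second case is exactly the risk the paragraph above describes: borrowed memory can vanish whenever its owner asks for it back.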

Since OpenVZ is OS-level virtualization, it consumes far fewer resources per VPS container than a full virtual environment. On two hosts with identical hardware and subscription rates, OpenVZ should perform better than KVM because it doesn't do full emulation. For example, it doesn't need to run multiple full OS kernels, as it can share the single kernel between multiple VPSes. The result is significant memory and CPU savings. In fact, most of the kernel memory usage is not charged to the VPS at all; instead, each VPS is only charged for what it needs in addition to the main kernel.

KVM is a hardware virtualization technology. This means the main OS on the server simulates hardware for another OS to run on top of it. It also acts as a hypervisor, managing and fairly distributing the shared resources like disk and network IO and CPU time.

KVM does not have burst resources; they are all dedicated or shared. This means resources like RAM and disk space are usually much harder to overcommit without endangering all user data. The downside with KVM is that if the limits are hit, the VPS must either swap, incurring a major performance penalty, or start killing its processes. Unlike OpenVZ, KVM VPSes cannot get a temporary reprieve by borrowing from their peers, as their dedicated resources are completely isolated.

Because KVM simulates hardware, you can run whatever kernel you like on it (within limits). This means KVM is not limited to the Linux kernel that is installed in the root node. KVM can also run other x86 operating systems like BSD and Microsoft Windows. Having a fully independent kernel means the VPS can make kernel modifications or load its own modules. This may be important because there are some more obscure features that OpenVZ does not support.

What is DNSSEC? And why is it important?

To reach someone else on the Internet you must type an address into your computer: a name or a number. That address has to be unique so computers know where to find each other. ICANN coordinates these unique identifiers across the world. Without that coordination, we would not have one global Internet. When you type a name, that name must first be translated into a number before the connection can be established. That system is called the Domain Name System (DNS), and it translates names like www.vastspace.net into numbers, called Internet Protocol (IP) addresses. ICANN coordinates the addressing system to ensure all the addresses are unique.

Recently, vulnerabilities in the DNS were discovered that allow an attacker to hijack this process of looking someone, or a site, up on the Internet by name. The purpose of the attack is to take control of the session in order to, for example, send the user to the hijacker's own deceptive website to collect the user's account name and password.

These vulnerabilities have increased interest in introducing a technology called DNS Security Extensions (DNSSEC) to secure this part of the Internet’s infrastructure.

The questions and answers that follow are an attempt to explain what DNSSEC is and why its implementation is important.

1) First, what is the root zone?

The DNS translates domain names that humans can remember into the numbers used by computers to look up a destination (a little like a phone book is used to look up a phone number). It does this in stages. The first place it 'looks' is the top level of the directory service, or "root zone". So, to use www.google.com as an example, your computer 'asks' the root zone directory (the top level) where to find information on ".com". After it gets a response, it asks the ".com" directory service identified by the root where to find information on google.com (the second level), and finally asks the google.com directory service identified by ".com" what the address for www.google.com is (the third level). After that process, which is almost instantaneous, the full address is provided to your computer. Different entities manage each of these directory services: google.com by Google, ".com" by VeriSign (other top level domains are managed by other organizations), and the root zone by ICANN.
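The staged lookup described above can be sketched in Python; the function simply lists the names that get asked about, from the top level down:

```python
def lookup_stages(fqdn):
    """The resolver works down from the root: first the TLD, then the
    second level, and so on until the full name is resolved."""
    labels = fqdn.split(".")
    stages = []
    for i in range(len(labels) - 1, -1, -1):
        stages.append(".".join(labels[i:]))
    return stages

print(lookup_stages("www.google.com"))
# -> ['com', 'google.com', 'www.google.com']
```

Each stage is answered by a different directory service, which is why a weakness at any one level can compromise the whole lookup.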

2) Why do we need to “sign the root”?

Recently discovered vulnerabilities in the DNS, combined with technological advances, have greatly reduced the time it takes an attacker to hijack any step of the DNS lookup process and thereby take over control of a session to, for example, direct users to their own deceptive websites for account and password collection. The only long-term solution to this vulnerability is the end-to-end deployment of a security protocol called DNS Security Extensions, or DNSSEC.

3) What is DNSSEC?

DNSSEC is a technology that was developed to, among other things, protect against such attacks by digitally 'signing' data so you can be assured it is valid. However, in order to eliminate the vulnerability from the Internet, it must be deployed at each step in the lookup, from the root zone to the final domain name (e.g., www.icann.org). Signing the root (deploying DNSSEC on the root zone) is a necessary step in this overall process. Importantly, it does not encrypt data. It just attests to the validity of the address of the site you visit.

4) What’s to stop all the other parts of the addressing chain from employing DNSSEC?

Nothing. But like any chain that relies on another part for its strength, if you leave the root zone unsigned you will have a crucial weakness. Some parts could be trusted and others might not be.

5) How will it improve security for the average user?

Full deployment of DNSSEC will ensure the end user is connecting to the actual web site or other service corresponding to a particular domain name. Although this will not solve all the security problems of the Internet, it does protect a critical piece of it – the directory lookup – complementing other technologies such as SSL (https:) that protect the “conversation”, and provide a platform for yet to be developed security improvements.

6) What actually happens when you sign the root?

“Signing the root” by using DNSSEC adds a few more records per top level domain to the root zone file. What are added are a key and a signature attesting to the validity of that key.

DNSSEC provides a validation path for records. It does not encrypt or change the management of data and is ‘backward compatible’ with the current DNS and applications. That means it doesn’t change the existing protocols upon which the Internet’s addressing system is based. It incorporates a chain of digital signatures into the DNS hierarchy with each level owning its own signature generating keys. This means that for a domain name like www.icann.org each organization along the way must sign the key of the one below it. For example, .org signs icann.org’s key, and the root signs .org’s key. During validation, DNSSEC follows this chain of trust up to the root automatically validating “child” keys with “parent” keys along the way. Since every key can be validated by the one above it, the only key needed to validate the whole domain name would be the top most parent or root key.
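That chain of trust can be sketched as a walk up a table of parent keys. This is a simplified model; real validation checks a cryptographic signature at every link rather than a lookup table.

```python
# Hypothetical key table: each zone's key is signed by its parent.
SIGNED_BY = {
    "www.icann.org": "icann.org",
    "icann.org": "org",
    "org": ".",  # "." is the root zone
}

def chain_of_trust(name, trusted_root="."):
    """Walk parent links until the root; the name validates only if
    the chain ends at the trusted root key."""
    chain = [name]
    while chain[-1] != trusted_root:
        parent = SIGNED_BY.get(chain[-1])
        if parent is None:
            return chain, False  # a missing link breaks validation
        chain.append(parent)
    return chain, True

chain, ok = chain_of_trust("www.icann.org")
print(chain)  # ['www.icann.org', 'icann.org', 'org', '.']
print(ok)     # True
```

Because every key is vouched for by the one above it, a validator only needs to trust the single root key, which is exactly why signing the root matters so much.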

This hierarchy does mean, however, that even with the root signed, full deployment of DNSSEC across all domain names will be a process that takes time, since every domain below must also be signed by its respective operator to complete a particular chain of trust. Signing the root is just a start. But it is crucial. Recently, TLD operators have accelerated their efforts to deploy DNSSEC on their zones (.se, .bg, .br, .cz and .pr do now, with .gov, .uk, .ca and others coming) and others expect to as well.

7) How is the root zone file managed?

Management of the root is shared between four entities:

i) ICANN, an international not-for-profit corporation under contract to the United States Department of Commerce, performs the "IANA" function. IANA stands for Internet Assigned Numbers Authority. ICANN receives and vets change information from the top level domain (TLD) operators (e.g. "com");

ii) National Telecommunications and Information Administration (NTIA) – which is an office within the United States Department of Commerce – authorizes changes to the root;

iii) VeriSign, a United States-based for-profit company, is contracted by the US Government to edit the root zone with the changed information supplied and authenticated by ICANN and authorized by the Department of Commerce, and to distribute the root zone file, which contains information on where to find information on TLDs (e.g. "com"); and

iv) An international group of root server operators that voluntarily run and own more than 200 servers around the world that distribute root information from the root zone file across the Internet. Designated by letter, the operators of the root servers are:

A) VeriSign Global Registry Services;
B) Information Sciences Institute at USC;
C) Cogent Communications;
D) University of Maryland;
E) NASA Ames Research Center;
F) Internet Systems Consortium Inc.;
G) U.S. DOD Network Information Center;
H) U.S. Army Research Lab;
I) Autonomica/NORDUnet, Sweden;
J) VeriSign Global Registry Services;
K) RIPE NCC, Netherlands;
L) ICANN;
M) WIDE Project, Japan.

Ref: http://www.root-servers.org

8) Why is it important for DNSSEC security that the vetting, the editing and the signing be done by one organization?

For DNSSEC, the strength of each link in the chain of trust is based on the trust the user has in the organization vetting the key and other DNS information for that link. In order to guarantee the integrity of this information and preserve this trust, once the data has been authenticated[iv] it must be immediately protected from errors, whether malicious or accidental, which can be introduced any time key data is exchanged across organizational boundaries. Having a single organization and system directly incorporate the authenticated material into the signed zone maintains that trust through to publication. It is simply more secure.

With the increased confidence in the security of DNS that DNSSEC will bring, it becomes ever more important that the trust established by ICANN's validation and authentication of TLD trust anchor material be maintained through to a signed root zone file.

9) In DNSSEC, what are the KSK and ZSK?

KSK stands for Key Signing Key (a long-term key) and ZSK stands for Zone Signing Key (a short-term key). Given sufficient time and data, cryptographic keys can eventually be compromised. In the case of the asymmetric or public key cryptography used in DNSSEC,[v] this means an attacker determines, through brute force or other methods, the private half of the public-private key pair used to create the signatures attesting to the validity of DNS records. This would allow the attacker to defeat the protections afforded by DNSSEC. DNSSEC thwarts these compromise attempts by using a short-term key – the zone signing key (ZSK) – to routinely compute signatures for the DNS records, and a long-term key – the key signing key (KSK) – to compute a signature on the ZSK that allows it to be validated. The ZSK is changed or rolled over frequently to make it difficult for an attacker to "guess", while the KSK is changed over a much longer time period (current best practices place this on the order of a year). Since the KSK signs the ZSK and the ZSK signs the DNS records, only the KSK is required to validate a DNS record in the zone. It is a digest of the KSK, in the form of a Delegation Signer (DS) record, that is passed up to the "parent" zone. The parent zone (e.g. the root) signs the DS record of the child (e.g. .org) with its own ZSK, which is in turn signed by its own KSK.
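The two-tier scheme can be sketched in a few lines. This is a toy model only: real DNSSEC uses asymmetric RSA or ECDSA signatures, whereas the sketch below substitutes stdlib HMACs (with invented key material) purely so the chain – KSK validates ZSK, ZSK validates records – can be shown end to end.

```python
import hashlib
import hmac

# Toy stand-in for a DNSSEC signature (real zones use asymmetric keys).
def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

ksk = b"long-term key, rolled roughly once a year"   # hypothetical key material
zsk = b"short-term key, rolled frequently"           # hypothetical key material

# The ZSK signs the ordinary DNS records...
record = b"www.example.org. A 192.0.2.1"
rrsig = sign(zsk, record)

# ...and the KSK signs the ZSK itself, so a validator that trusts
# the KSK can first validate the ZSK, then validate the records.
zsk_sig = sign(ksk, zsk)

assert hmac.compare_digest(sign(ksk, zsk), zsk_sig)   # ZSK checks out under KSK
assert hmac.compare_digest(sign(zsk, record), rrsig)  # record checks out under ZSK
```

Rolling the ZSK only requires re-signing the zone's records and publishing a new KSK signature over the new ZSK; the KSK itself, and the DS record derived from it in the parent, stay stable for much longer.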

This means that, if DNSSEC is fully adopted, the KSK for the root zone would be part of the validation chain for every DNSSEC-validated domain name (or yet-to-be-developed application).
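The DS record mentioned above is just a digest over the child's key. As a hedged sketch: for digest type 2 (SHA-256), RFC 4034 defines the digest as the hash of the owner name in DNS wire format concatenated with the DNSKEY RDATA. The key bytes below are dummy values for illustration, not a real .org key.

```python
import hashlib

def wire_name(name: str) -> bytes:
    """Encode a domain name in DNS wire format, e.g.
    "example.org." -> b"\\x07example\\x03org\\x00"."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

# Dummy DNSKEY RDATA: flags=257 (KSK), protocol=3, algorithm=8, fake key bytes.
dnskey_rdata = b"\x01\x01" + b"\x03" + b"\x08" + b"\x00" * 32

# DS digest (type 2, SHA-256) = SHA-256(owner name wire format | DNSKEY RDATA)
digest = hashlib.sha256(wire_name("example.org.") + dnskey_rdata).hexdigest().upper()
print(digest)
```

The parent zone publishes this digest, signed with its own keys, which is what stitches one level's chain of trust to the next.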

10) Who manages the keys?

Under this proposal, ICANN would maintain the key infrastructure, but the credentials needed to actually generate the KSK would be held by outside parties. This is an important element in the overall global acceptance of the process. ICANN did not propose a specific solution as to which entities should hold the credentials, and believes that this, like all of these issues, is a matter for public consultation and for decision by the United States Department of Commerce.


 

Virtual Private Server, do you need one?

A Virtual Private Server (VPS) is a virtual machine sold as a service by an Internet hosting provider.

A Virtual Private Server runs its own copy of an operating system, and customers have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes a VPS is functionally equivalent to a dedicated physical server and, being software-defined, can be created and configured much more easily. A VPS is priced much lower than an equivalent physical server; however, because it shares the underlying physical hardware with other VPSs, performance may be lower and may depend on the workload of the other instances on the same hardware node.

 

The force driving server virtualization is similar to that which led to the development of time-sharing and multiprogramming in the past. Although the resources are still shared, as under the time-sharing model, virtualization provides a higher level of security, depending on the type of virtualization used, since the individual virtual servers are mostly isolated from each other and can each run their own full-fledged operating system, which can be independently rebooted as a virtual instance.

Partitioning a single server so that it appears as multiple servers has been increasingly common on microcomputers since the release of VMware ESX Server in 2001. The physical server typically runs a hypervisor, which is tasked with creating, releasing, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of the resources of the physical server, typically in such a way that the guest is unaware of any physical resources other than those allocated to it by the hypervisor. Because a Virtual Private Server runs its own copy of its operating system, customers have superuser-level access to that operating system instance and can install almost any software that runs on the OS; however, because of the number of virtualization clients typically running on a single machine, a VPS generally has limited processor time, RAM, and disk space.

Although VMware and Hyper-V dominate in-house corporate virtualization, due to their cost and limitations they are less common among VPS providers, which instead typically use products such as OpenVZ, Virtuozzo, Xen or KVM.

Many companies offer virtual private server hosting or virtual dedicated server hosting as an extension of their web hosting services. There are several challenges to consider when licensing proprietary software in multi-tenant virtual environments.

With unmanaged or self-managed hosting, the client remains to manage their own server instance.

Unmetered hosting is usually offered with no limit on the amount of data transferred on a fixed bandwidth line. Usually, unmetered hosting is offered at 10 Mbit/s, 100 Mbit/s or 1000 Mbit/s (with some providers offering up to 10 Gbit/s). This means that the customer is theoretically able to transfer ~3 TB per month on a 10 Mbit/s line, or as much as ~300 TB on a 1000 Mbit/s line, although in practice the values will be considerably less. On a VPS this bandwidth is shared, and a fair usage policy should be involved. Unlimited hosting is also commonly marketed, but it is generally restricted by acceptable usage policies and terms of service. Offers of unlimited disk space and bandwidth are always false, due to cost, carrier capacities and technological limitations.
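The "~3 TB on 10 Mbit/s" figure above is simple arithmetic: a line kept fully saturated for a 30-day month moves its line rate (divided by 8 to convert bits to bytes) for roughly 2.6 million seconds. A quick sketch of the calculation:

```python
def monthly_tb(mbit_per_s: float, days: int = 30) -> float:
    """Theoretical maximum transfer on a saturated line, in decimal TB."""
    seconds = days * 24 * 3600                      # seconds in the month
    total_bytes = mbit_per_s * 1_000_000 / 8 * seconds  # bits/s -> bytes
    return total_bytes / 1e12                       # bytes -> terabytes

print(round(monthly_tb(10), 2))    # 10 Mbit/s  -> ~3.24 TB
print(round(monthly_tb(1000)))     # 1000 Mbit/s -> ~324 TB
```

Real-world totals fall well short of these ceilings, since no link stays fully saturated around the clock and, on a VPS, the line is shared with other tenants.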