Secondary DNS Explained

  • By Gcore
  • July 10, 2023
  • 8 min read

Your domains are the entry point to your online services, so their reliability and performance are vital for success. With secondary DNS, you can add redundancy to your name servers while minimizing domain resolution latency. Whether you’re a product manager wanting to refine the first stages of a sales funnel or an engineer needing to meet a service level objective, secondary DNS can help you reach your business goals. This article explains what secondary DNS is, how it fits into the overall design of DNS, and what benefits you get from adding it to your architecture.

What Is Secondary DNS?

A secondary DNS server is a type of DNS server that automatically stores copies of all the DNS records from a primary DNS server. If your primary server cannot be reached or is busy, the secondary DNS server steps in to handle requests. This adds redundancy and ensures the continuous availability of your DNS services, safeguarding against potential disruptions in network services.

How Does DNS Work?

To better understand secondary DNS, let’s take a quick detour into the basics of DNS. This background is necessary to see where secondary name servers fit into the overall picture.

DNS is short for Domain Name System. DNS is a distributed system that resolves domain names into IP addresses, other domain names, or arbitrary text. It adds a layer of indirection between the actual addresses of the servers on the internet and the clients that want to access them. It’s an essential part of the internet: without it, you would have to remember long numbers like 192.168.43.10 or 2001:db8::ff00:42:8329 and update them everywhere when they change.

How Does Name Resolution Work?

Name resolution happens via simple lookup tables called zone files, which are filled with resource records (RRs). Each RR contains the following fields:

  • The name field contains a fully qualified domain name, which serves as the lookup key of the zone file.
  • The type field contains the record type. Important types are:
    • SOA for administrative data, e.g., the zone file version
    • A and AAAA for IP addresses
    • CNAME for aliasing other domain names
    • MX for the domain names of mail servers
    • TXT for arbitrary text
    • NS for the domain names of authoritative name servers; this is also the record you can check for secondary name servers
  • The data field contains data like an IP address, another domain name, or text. It serves as the value of the zone file.
  • The time to live (TTL) contains the time a client can cache the resolved data locally.
  • The class field contains a protocol class. On the internet, its value is always IN.

Look at the following zone file for a fictional example.com zone with multiple NS records; the comments after the semicolons explain the individual values.

Figure 1: Example zone file with DNS records
$TTL 86400
example.com. IN SOA ns1.example.com. hostmaster.example.com. (
    2023061901 ; serial number YYYYMMDDnn
    3600       ; refresh every hour
    1800       ; retry every 30 minutes
    604800     ; expire after 1 week
    86400 )    ; minimum TTL of 1 day
example.com. IN NS ns1.example.com. ; could be a primary server
example.com. IN NS ns2.example.com. ; could be a secondary server
example.com. IN MX 10 mail.example.com.
example.com.      IN A 192.0.2.1
ns1.example.com.  IN A 192.0.2.2
ns2.example.com.  IN A 192.0.2.3
mail.example.com. IN A 192.0.2.4
hello.example.com. IN TXT "Hello, world!"

The first line defines the default TTL, which a name server automatically applies to each RR that doesn’t have its own TTL defined.

The first RR is the SOA record. It’s mandatory and includes administrative information: a name server, the email address of the responsible domain admin (written with a dot instead of an @ symbol), the serial number that versions the zone file, and timers for zone refreshes and caching.

The two NS records define the authoritative name servers for this zone; both use domain names from within the zone itself. ns1.example.com is the primary name server, and ns2.example.com is the secondary. This is where you would add your secondary name servers so that clients can find them.

The MX record defines the email server for this zone.

The A records then map the domain names of the name and mail servers to IP addresses, which clients can use to connect to those servers. The zone also includes an A record for the apex, which maps the bare example.com domain to an IP address.

The final TXT record resolves to the string Hello, world! when queried.
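If this zone were live, you could query the individual records with a standard DNS lookup tool such as dig; the names and values below come from the example zone above and are placeholders, not real infrastructure:

# Resolve the apex A record (192.0.2.1 in the example zone)
dig +short example.com A

# Resolve the TXT record ("Hello, world!" in the example zone)
dig +short hello.example.com TXT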

How Do Secondary Servers Relate to Primary (Authoritative) Servers?

Adding a secondary name server to a zone’s NS records turns it into an authoritative name server for that zone; it becomes part of the global DNS hierarchy. An authoritative name server is any server listed in an NS record for a zone, and it usually holds all the DNS records for its designated zone. Both primary and secondary servers can be authoritative for a zone.

Adding NS records for secondary servers is crucial because clients only know about authoritative servers; they normally don’t know about the concept of primary and secondary name servers. If you attach a secondary server to a primary one but don’t publish the secondary’s address in an NS record, clients can’t find it.
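You can verify which name servers are actually published for a zone, and are therefore visible to clients, by querying its NS records; example.com stands in for your own domain:

dig +short example.com NS

If a secondary server doesn’t appear in the answer, clients won’t use it, no matter how well its zone transfers are configured.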

Recursive Name Servers

The counterpart of an authoritative name server is a recursive name server, which isn’t responsible for a zone. Recursive servers relay queries to other name servers for resolution and may cache the results for performance reasons. Since they don’t have zone files that need synchronization, they can’t be secondary name servers.
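The difference is easy to observe with dig: a recursive resolver (a public one such as 8.8.8.8 in this sketch) chases the answer through the DNS tree for you, while an authoritative server answers directly from its own zone, even with recursion switched off. ns1.example.com is a placeholder:

# Ask a recursive resolver; it walks the DNS tree on your behalf
dig @8.8.8.8 example.com A

# Ask the zone's authoritative server directly, with recursion disabled;
# its response carries the aa (authoritative answer) flag
dig @ns1.example.com example.com A +norecurse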

The DNS Resolution Process

DNS is a distributed system, meaning that no single server is responsible for all domains. Instead, the domain space consists of multiple zones that form a tree structure. Each zone contains one or more domain names and has one or more name servers responsible for it. If a zone isn’t responsible for a (sub)domain, its zone file contains an NS RR that delegates that domain to another responsible server. The name servers responsible for a zone are called the zone’s authoritative name servers. We’ve written an article that explains DNS zones in more detail; check it out if you want to learn more about zones.

The process of adding a new RR to the zone file and resolving it for a client is illustrated in Figure 2.

Figure 2: Adding and resolving a resource record
  1. A domain admin adds a new RR to the primary (authoritative) name server, for example, an A-record for the domain example.com.
  2. The secondary name servers either poll the primary servers for updates or are notified and download the updates via AXFR or IXFR.
  3. Secondary servers that can’t reach the primary server can receive updates from other secondary servers that are authoritative for their zone.
  4. An application—such as a browser—sends a query for the resolution of the example.com domain to the local resolver.
  5. The local resolver relays the query to a recursive name server (RNS), which relays it to the authoritative servers that hold the zone files with the RRs.
  6. The RNS queries a root server, which only holds NS records for top-level domain (TLD) name servers. It returns the NS records for name servers authoritative for the com domain.
  7. The RNS queries the TLD server, which only holds NS records for the domains under the com TLD. It returns the NS records for the authoritative name servers of the example.com domain.
  8. The RNS queries one of the name servers responsible for example.com, in this case, a secondary name server. The server is chosen by round-robin. This server returns the data of the A or AAAA record for example.com. The RNS returns the data to the resolver, and the resolver returns it to the application.
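You can watch this delegation chain, from the root servers through the TLD servers down to the zone’s authoritative (primary or secondary) name servers, with dig’s trace mode; example.com is again a placeholder:

# Follow the referrals from the root to the authoritative name servers
dig +trace example.com A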

How Do Secondary Name Servers Synchronize with a Primary Name Server?

The mechanism to keep name servers in sync is called a zone transfer. The secondary servers either poll other servers in their zone for updates or get notified by their primary server. Both polling and notification rely on the version number of a zone file.

Suppose the secondary name server sees that the version has changed. In that case, it initiates a zone transfer with either the full zone transfer protocol (AXFR) or the incremental zone transfer protocol (IXFR) to fetch the latest RRs from the primary server or from other secondary servers that are more up to date.
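You can also trigger a full zone transfer manually, which is a handy way to check that a secondary is allowed to fetch the zone. This is only a sketch: ns1.example.com is a placeholder, and the primary must be configured to allow transfers from the requesting address, otherwise the request is refused:

# Request a full zone transfer (AXFR) from the primary name server
dig @ns1.example.com example.com AXFR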

It is also possible to have multiple primary name servers whose zone files are synchronized manually. In the context of secondary DNS, “manually” means the servers don’t synchronize via DNS-specific mechanisms. It’s still possible to synchronize them by other automated means, like Terraform scripts: a domain admin updates the zone definition in the script, and Terraform applies it to all primary name servers on the next deploy, as sketched below.
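As a rough illustration of that approach, the sketch below uses the community hashicorp/dns Terraform provider, which pushes records to name servers via RFC 2136 dynamic updates; with one provider alias per primary, the same record definition is applied to every primary on each run. The server addresses, record values, and resource names are placeholders, and a real setup would likely authenticate updates with a TSIG key and may use a different provider entirely.

# Two primary name servers, addressed via provider aliases (placeholder IPs).
provider "dns" {
  update {
    server = "192.0.2.2" # first primary
  }
}

provider "dns" {
  alias = "primary2"
  update {
    server = "192.0.2.5" # second primary
  }
}

# The same A record, applied to both primaries on every terraform apply
resource "dns_a_record_set" "www" {
  zone      = "example.com."
  name      = "www"
  addresses = ["192.0.2.10"]
  ttl       = 300
}

resource "dns_a_record_set" "www_primary2" {
  provider  = dns.primary2
  zone      = "example.com."
  name      = "www"
  addresses = ["192.0.2.10"]
  ttl       = 300
}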

What Are the Benefits of Secondary DNS?

Now that we understand the zone transfer mechanism, let’s look at the benefits of secondary DNS beyond just automated synchronization.

Improved DNS Redundancy and Resiliency

Adding secondary name servers located in different data centers improves DNS redundancy. If one name server crashes or isn’t available for other reasons, clients can still use the remaining servers to resolve domain names.

Secondary name servers also improve DNS resiliency because they don’t just synchronize with the primary server but also with the other secondary servers in their zone. This way, updated RRs can still propagate to a secondary server that can’t connect to the primary server.

If a primary server is authoritative, it has to resolve queries and send updates to the secondary servers. Resolving client queries usually has the highest priority, so if the load on the primary server is too high, a secondary might receive a notification but fail to complete the zone transfer. If the zone transfer times out, the secondary server can ask other secondaries for the update, which also lowers the load on the primary.

Low DNS Latency

If you branch your operations out to another continent, you can improve latency by deploying a secondary name server in that new location. If your users are geographically dispersed, you can spread your secondary name servers across the globe so that each user can reach one close to them. With only a primary server, you must settle on a single location, which might deliver low-latency resolution to some of your users while failing to provide it for others.

Clients select name servers via NS records and use round-robin to choose a new server for every subsequent query. This mechanism doesn’t help a client find the server with the lowest latency, but you can put multiple servers behind a single NS record by giving them a shared Anycast IP address. Anycast routes each request to the nearest server announcing that address, so clients are typically served by the server with the lowest latency.

While Anycast isn’t directly part of DNS, it works hand in hand with the zone transfer mechanism: the secondary name servers keep each other synchronized with the primary server, and Anycast routing steers each client to the nearest server.
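Whether a given name server address is Anycast is invisible in normal query output, but many operators expose an instance identifier through the conventional CHAOS-class id.server (or hostname.bind) TXT name; support varies by operator, and ns1.example.com below is a placeholder:

# Ask the server to identify itself; clients in different regions hitting the
# same Anycast IP address typically see different instance names
dig @ns1.example.com id.server CH TXT +short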

DNS Load Balancing

A zone file can have NS records for multiple name servers. The domain resolution process chooses one of these servers using a round-robin algorithm. In this algorithm, the client remembers which name server it already used and selects another one for the next query. This way, each subsequent query hits another server that can handle it while the previous one is still busy.

If your user base grows, your infrastructure must also support the additional load. The mantra of the cloud is horizontal scaling: if load rises, add more servers. Secondary DNS, which spreads the load over multiple name servers, works in exactly this spirit.

Improved Security for Primary Servers

In a typical DNS setup, you would add an NS record for your primary name server so that clients can query the primary name server directly. Clients use NS records to find name servers, but primary and secondary servers use other means of communication, so it’s possible to synchronize without any of them being present in an NS record. This means if you only add NS records for secondary servers, clients don’t know about your primary server. You can use the primary server to update your RRs and synchronize them with the secondary servers, while the secondary servers are responsible for resolving client queries.

With this technique, your primary server is hidden from the public, can focus on zone transfers, and is protected from potential attackers.
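In zone file terms, a hidden-primary setup just means that only the public secondary servers appear in NS records. A minimal sketch, reusing the fictional zone from Figure 1 and adding a hypothetical second secondary, ns3:

; only the secondaries are published; clients never learn about ns1
example.com. IN NS ns2.example.com.
example.com. IN NS ns3.example.com.
; ns1.example.com remains the hidden primary and appears in no NS record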

Leveraging Cloud DNS with On-Premises DNS Servers

Many organizations—especially bigger, more mature ones—already run their own name servers on-premises. Often, these are tightly coupled to the infrastructure with custom scripts and processes that create, update, and remove RRs.

Secondary name servers allow these organizations to keep their existing on-premises server as the primary and add secondary name servers that run in the cloud. This way, they gain the benefits of the cloud and of secondary DNS with minimal changes to their on-premises infrastructure.

Conclusion

Your domains are the entry point to your websites and applications, so it’s crucial to ensure that users can always resolve them promptly. Secondary DNS helps you achieve this goal by letting you add extra name servers to your setup that synchronize with each other automatically. Each secondary name server acts like an authoritative name server and can resolve domain names just like your primary server, but without administrative overhead. When deployed smartly around the globe in tandem with technologies like Anycast, secondary DNS even boosts performance by lowering latency and can help to protect your primary servers from attacks.

Gcore’s DNS hosting allows you to set up zone transfers for secondary DNS with the open-source tool OctoDNS, so you get all the mentioned benefits without thinking about global deployments. Check out our docs to learn how to get started!

Related articles

How to cut egress costs and speed up delivery using Gcore CDN and Object Storage

How do CDNs work?

What is a CDN?

How to Migrate Your Video Files to Gcore Video Streaming

5 Ways to Improve Website Speed for E-Commerce

What Website Speed Is and Why It Matters for E-commerce Success
