The origins of Public Key Infrastructure (PKI) date back to the 1970s and research at UK intelligence agency GCHQ, though it didn’t emerge from the secret world and take off commercially until the 1990s.
PKI still underlies a great deal of modern cryptography, so we spoke to Ryan Yackel, VP product marketing at Keyfactor, to find out more about it and why it isn’t going away any time soon.
BN: What is public key infrastructure (PKI) and what is it used for?
RY: PKI governs the issuance of the digital certificates that are used to protect sensitive data; provide unique digital identities for users, devices, and applications; and deliver secure end-to-end communications.
Organizations rely on PKI to manage security through data encryption. The most common form of encryption used today involves a public key, which anyone can use to encrypt a message, and a private key, which only one person should be able to use to decrypt it. The keys can be used by people, devices, and applications.
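To illustrate that split between public and private keys, here is a toy RSA sketch in Python using deliberately tiny primes. This is illustrative only: real deployments use 2048-bit or larger keys generated by a vetted cryptographic library, never numbers like these.

```python
# Toy RSA key pair built from tiny primes -- illustrative only, never secure.
p, q = 61, 53
n = p * q                    # public modulus, shared by both keys
phi = (p - 1) * (q - 1)      # used to derive the private exponent
e = 17                       # public exponent: anyone may encrypt with (e, n)
d = pow(e, -1, phi)          # private exponent: only the holder can decrypt

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the private key
print(recovered == message)        # True: only d reverses what e did
```

The asymmetry is the whole point: publishing `(e, n)` lets anyone send you an encrypted message, while recovering it requires `d`, which never leaves your hands.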
PKI came into use in the 1990s to govern encryption keys through the issuance and management of digital certificates. The certificates verify the owner of a private key to help maintain security, essentially acting as the electronic equivalent of a driver’s license or passport. They contain information about an individual or entity; are issued by a trusted third party; are tamper-resistant; contain information that can prove their authenticity; can be traced back to their issuers; have an expiration date; and are presented to someone (or something) for validation.
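Those properties map naturally onto a validation routine. The sketch below uses a hypothetical certificate record with field names of my choosing, and checks only the expiration-date property; full validation would also verify the signature and the chain back to a trusted issuer.

```python
from datetime import datetime, timezone

# Hypothetical record mirroring a few of the certificate fields described above.
cert = {
    "subject": "www.example.com",
    "issuer": "Example Trusted CA",
    "not_before": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "not_after": datetime(2025, 1, 1, tzinfo=timezone.utc),
}

def is_within_validity(cert, now=None):
    """One step of validation: is `now` inside the certificate's window?"""
    now = now or datetime.now(timezone.utc)
    return cert["not_before"] <= now <= cert["not_after"]

print(is_within_validity(cert, datetime(2024, 6, 1, tzinfo=timezone.utc)))  # True
print(is_within_validity(cert, datetime(2026, 1, 1, tzinfo=timezone.utc)))  # False
```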
A common example of PKI security today is Secure Sockets Layer (SSL) certificates on websites, which let visitors know they’re sending information to the intended recipient. Additional examples include digital signatures and authentication for the growing number of Internet of Things (IoT) devices in modern environments.
BN: What’s the difference between traditional and decentralized PKI?
RY: Behind every digital certificate is a certificate authority (CA). Microsoft Active Directory Certificate Services (ADCS), often referred to as Microsoft CA, has long been the CA of choice for many organizations. It integrates well with Microsoft infrastructure and supports standard use cases such as user and device authentication.
The move to the cloud introduces new challenges, however. For instance, most on-premises PKI deployments were not designed to handle the volume and velocity of certificate usage today. There is also a lack of out-of-the-box integrations with non-Microsoft infrastructure, and commonly overlooked misconfigurations can lead to security risks.
Many organizations have outgrown their traditional PKI, and as a result, need to rebuild or redesign to support this new reality. PKI no longer consists of just one or two CAs within a data center. Today’s hybrid and multi-cloud environments involve various public, private, open-source, and cloud-based CAs, each implemented by different teams to meet specific use cases.
What organizations need is a decentralized PKI model that acts as a web of trust extending across on-premises and cloud environments.
BN: What factors are driving decentralized PKI?
RY: A number of factors are driving the shift to decentralized PKI. One is hybrid trust. Many organizations rely on a mix of trusted third-party CAs and internal private CAs to meet trust models within and outside the organization. They’re using multiple cloud services, such as AWS, Azure, and Google Cloud Platform, each with its own built-in capabilities for certificate issuance.
Another is availability. Uptime is critical for business, and decentralized PKI infrastructure is deployed in clustered, geo-redundant or high-availability architectures to avoid a single point of failure and ensure uptime for certificate revocation and issuance.
Also driving the trend are specialized use cases. For example, continuous integration/continuous delivery (CI/CD) toolchains and containerized environments require short-lived SSL/TLS certificates, versus traditional web servers and devices that might use one- or two-year certificates.
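To make the cadence difference concrete, here is a small sketch using one common rule of thumb, renewing at roughly two-thirds of a certificate's lifetime. The fraction is an assumption for illustration, not a standard.

```python
from datetime import timedelta

def renewal_point(lifetime, fraction=2 / 3):
    """Rule of thumb: schedule renewal after ~2/3 of the lifetime has elapsed."""
    return lifetime * fraction

# A 24-hour cert for a CI/CD job renews roughly every 16 hours; a one-year
# server cert renews roughly every 8 months -- very different automation needs.
ci_cert = renewal_point(timedelta(hours=24))
web_cert = renewal_point(timedelta(days=365))
print(ci_cert, web_cert)
```

At a 16-hour cadence, manual renewal is simply impossible, which is why short-lived certificates force automation.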
Dispersed teams are another factor. Different teams and departments across the organization prefer different CAs due to cost, requirements, certificate types, assurance levels, etc. A lack of integrations can also drive decentralized PKI. ADCS is well-suited for Microsoft infrastructure, but it does not offer native support for other applications. This creates a heavy burden on teams to develop homegrown scripts and tracking mechanisms.
Finally, there’s business growth. Mergers and acquisitions in high-growth companies result in mixed CA environments, often with conflicting rules and security policies.
BN: What are best practices for implementing modern PKI?
RY: When determining how to best deploy and use decentralized PKI, organizations need to consider several key factors. One is trust requirements. Organizations need to determine where public and private certificates are best suited on a case-by-case basis to avoid blurring trust boundaries. Then they must consider the PKI infrastructure needed to support this trust model and how they can delegate and manage trust across different silos.
Another is assurance levels. Companies should consider the physical and digital safeguards around their root and issuing CAs. For testing purposes, it might be acceptable to issue certificates from a low-assurance PKI, whereas production certificates require higher levels of assurance.
Yet another factor is required expertise. Does the organization have the right expertise, bandwidth, hardware, and security controls in place to implement a secure internal PKI and maintain it over a 10- to 20-year lifespan?
Organizations also need to identify use cases across infrastructure, security, network, and application teams. They must determine the certificate types, templates, issuance volume, protocols, integrations, and a number of additional factors and capabilities that these teams will need to support their specific use cases.
Most importantly, they need to have a high level of crypto-agility to prepare for the eventual migration to new key sizes and algorithms. The ability to revoke and re-issue certificates at massive scale is critical for success.
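A minimal sketch of where crypto-agility tooling starts, assuming a hypothetical inventory format and a policy invented for illustration: scan the certificate inventory and flag anything that no longer meets current key-size or signature-algorithm policy, so it can be revoked and re-issued.

```python
# Hypothetical policy and inventory records, invented for illustration.
POLICY = {
    "min_rsa_bits": 2048,
    "banned_sig_algs": {"md5WithRSAEncryption", "sha1WithRSAEncryption"},
}

inventory = [
    {"subject": "legacy.internal", "key_bits": 1024,
     "sig_alg": "sha1WithRSAEncryption"},
    {"subject": "api.example.com", "key_bits": 2048,
     "sig_alg": "sha256WithRSAEncryption"},
]

def needs_reissue(cert, policy=POLICY):
    """Flag certificates that fall below current cryptographic policy."""
    return (cert["key_bits"] < policy["min_rsa_bits"]
            or cert["sig_alg"] in policy["banned_sig_algs"])

to_reissue = [c["subject"] for c in inventory if needs_reissue(c)]
print(to_reissue)  # ['legacy.internal']
```

The same loop is what a migration to new algorithms looks like at scale: update the policy, re-scan, and drive re-issuance from the flagged list.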
BN: What are the most important PKI metrics for security teams to track?
RY: When it comes to PKI, visibility is extremely important. If an organization doesn’t know the state of its cryptography, PKI operations become exceedingly difficult. And when it comes to visibility, there are 10 important metrics to track.
These include:

1. Expiration status: when certificates are going to expire and who needs to be notified before that happens.
2. Key size and strength: to determine whether keys are weak and avoid becoming vulnerable to attack as a result.
3. Signing algorithms: the foundation of trust and security for PKI.
4. CA issuance: knowing all the CAs the organization uses and keeping tabs on the certificates they issue.
5. Certificate requesters and owners: who’s requesting certificates and who owns them.
6. Self-signed certificates: understanding where they exist.
7. Wildcard certificates: certificates used on multiple subdomains.
8. Automated vs. manual certificates: which certificates have automated deployments and renewals and which require manual updates.
9. Certificate revocation list (CRL) health: whether a certificate is still valid.
10. Unknown certificates: finding all the certificates in the environment that the organization doesn’t know about.
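The expiration-status metric is straightforward to automate once an inventory exists. A sketch, again assuming a hypothetical inventory format:

```python
from datetime import datetime, timedelta, timezone

def expiring_within(inventory, days=30, now=None):
    """Return certificates that expire within `days`, for expiry alerting."""
    now = now or datetime.now(timezone.utc)
    cutoff = now + timedelta(days=days)
    return [c for c in inventory if c["not_after"] <= cutoff]

inventory = [
    {"subject": "soon.example.com",
     "not_after": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"subject": "later.example.com",
     "not_after": datetime(2026, 1, 1, tzinfo=timezone.utc)},
]

soon = expiring_within(inventory, days=30,
                       now=datetime(2025, 1, 1, tzinfo=timezone.utc))
print([c["subject"] for c in soon])  # ['soon.example.com']
```

Wiring the result into a notification for each certificate's owner covers the "who needs to be notified" half of the metric.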
While there are many more PKI metrics to track, together, these provide a great deal of visibility that can help teams achieve PKI success.
Image Credit: Maksim Kabakou / Shutterstock