In this Forefront interview, Bob Messner, a seasoned IT leader with over 25 years in higher education and a CISSP-certified cybersecurity expert, delves into the evolving landscape of institutional cybersecurity. From data protection strategies and breach preparedness to the impact of AI and the latest regulations like the Gramm-Leach-Bliley Act, Bob unpacks the essentials for safeguarding higher education. Don’t miss his insights on staying ahead of emerging threats and building a resilient digital foundation.
Data is the cornerstone of every higher education institution. It encompasses everything from student records and research data to operational and financial information. Unlike hardware or infrastructure, data cannot simply be replaced—it is unique, irreplaceable, and holds the key to an institution's success. Losing data is not just a technical issue; it’s an existential threat that can cripple trust, reputation, and functionality.
Understanding the critical importance of data is the first step in effective cybersecurity. Institutions must treat data as their most valuable asset, prioritizing its protection at every stage of its lifecycle. This begins with data classification—a process that determines whether information is for public, private, or confidential use. Clear classifications guide how data is accessed, shared, and stored, ensuring it is handled appropriately at all times.
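To make the idea concrete, here is a minimal sketch of how classification labels can drive handling rules. The tiers follow the public/private/confidential split described above; the specific rules and names are illustrative assumptions, not Messner's.

```python
from enum import Enum

class Classification(Enum):
    """Illustrative classification tiers; actual tiers vary by institution."""
    PUBLIC = 1
    PRIVATE = 2
    CONFIDENTIAL = 3

# Hypothetical handling rules keyed by classification tier.
HANDLING_RULES = {
    Classification.PUBLIC: {"encrypt_at_rest": False, "external_sharing": True},
    Classification.PRIVATE: {"encrypt_at_rest": True, "external_sharing": False},
    Classification.CONFIDENTIAL: {"encrypt_at_rest": True, "external_sharing": False},
}

def may_share_externally(label: Classification) -> bool:
    """Return whether data with this label may leave the institution."""
    return HANDLING_RULES[label]["external_sharing"]

print(may_share_externally(Classification.PUBLIC))        # True
print(may_share_externally(Classification.CONFIDENTIAL))  # False
```

The point of encoding the rules this way is that access, sharing, and storage decisions all flow from a single label, so data is handled consistently wherever it travels.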
Retention policies are another essential element. Institutions must define how long data is stored, balancing operational needs with regulatory compliance. Holding onto data longer than necessary increases exposure to breaches, while premature deletion can hinder operations or violate legal requirements.
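A retention policy ultimately reduces to a simple, auditable check: has this record outlived its defined period? The sketch below illustrates the shape of that check; the record types and periods are hypothetical, since real ones come from institutional policy and regulation, not from code.

```python
from datetime import date, timedelta

# Illustrative retention periods by record type; real periods are set by
# policy and regulation (e.g., FERPA), not by this sketch.
RETENTION_PERIODS = {
    "admissions_application": timedelta(days=5 * 365),
    "financial_aid_record": timedelta(days=3 * 365),
}

def is_due_for_destruction(record_type: str, created: date, today: date) -> bool:
    """True when a record has outlived its defined retention period."""
    return today - created > RETENTION_PERIODS[record_type]

# A financial aid record from 2018 is well past a three-year period:
print(is_due_for_destruction("financial_aid_record", date(2018, 9, 1), date(2025, 1, 1)))  # True
```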
Moreover, access controls play a critical role in safeguarding data. By enforcing strict protocols, institutions can limit access to authorized individuals only. This minimizes opportunities for unauthorized use or breaches, ensuring sensitive information remains secure.
Effective data protection also requires an overarching strategy that includes secure storage methods and routine audits. Institutions must invest in secure systems and regularly evaluate their data management practices. This helps to ensure compliance with emerging regulations and identifies vulnerabilities before they can be exploited.
As Bob Messner succinctly puts it, "When you lose your data, you lose the most valuable asset your organization has." By adopting a proactive approach to data management—classification, retention, access control, and ongoing evaluation—institutions can safeguard their data, uphold their mission, and build a foundation for resilience in an increasingly digital world.
Effective cybersecurity relies on robust policies that guide institutions in safeguarding data and resources. For higher education, where sensitive information flows constantly, these policies are critical.
The Acceptable Use Policy (AUP) defines how institutional hardware, software, and data may be used, ensuring responsible access and preventing unauthorized activities. It provides clarity on what is permitted and what is not, protecting both users and the institution.
Equally important is the Privacy Policy, which informs external stakeholders—students, parents, and others—how their personal data is collected, stored, and protected. This transparency builds trust and demonstrates the institution’s commitment to safeguarding sensitive information.
The Data Governance Framework ties these policies together, managing data throughout its lifecycle. It defines access, retention, and security protocols, supported by regular audits to ensure compliance and identify risks. These frameworks are essential for adapting to emerging threats and maintaining regulatory compliance.
Policies must also evolve to address modern challenges, such as AI-driven threats like highly convincing phishing emails. Institutions must regularly update guidelines and educate faculty, staff, and students about these new risks, ensuring they are prepared to recognize and mitigate potential vulnerabilities.
Vendor and third-party management is another vital component. Institutions must assess their partners’ cybersecurity measures to prevent breaches stemming from external vulnerabilities. For example, the MOVEit breach highlighted the risks of insufficient oversight, where a third-party weakness rippled out to affect educational institutions.
Enforcing and embedding these policies into the institution’s culture through training and regular updates ensures they remain effective. Policies are more than just rules—they are the foundation of a resilient cybersecurity strategy, equipping institutions to protect their most valuable asset: data. With clear, actionable guidelines, everyone within the organization plays a role in safeguarding its future.
Securing access to data is one of the most fundamental principles of cybersecurity. Effective access control ensures that only authorized users can reach sensitive information, while minimizing the risk of breaches or misuse. A critical strategy for this is the principle of least privilege, which restricts users’ access to only the data and systems they need for their roles. This approach reduces the risk of accidental or intentional misuse, especially as users change roles and their access requirements evolve. Regular audits are essential to ensure users do not accumulate unnecessary privileges over time.
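The audit step described above can be reduced to a set difference: compare what a user has been granted against what their current role actually requires. This sketch uses hypothetical roles and permission names to illustrate the idea.

```python
# Illustrative role-to-permission mapping and a simple audit that flags
# privileges a user holds beyond what their current role requires.
ROLE_PERMISSIONS = {
    "registrar_staff": {"read_student_records", "update_student_records"},
    "faculty": {"read_course_rosters"},
}

def excess_privileges(role: str, granted: set[str]) -> set[str]:
    """Return privileges granted to a user that their role does not require."""
    return granted - ROLE_PERMISSIONS.get(role, set())

# A user who moved from registrar staff to faculty but kept old access:
leftover = excess_privileges("faculty", {"read_course_rosters", "update_student_records"})
print(leftover)  # {'update_student_records'}
```

Running a check like this on a schedule is one way to catch the privilege creep that role changes cause over time.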
Beyond controlling access, institutions must secure the endpoints—the laptops, desktops, and mobile devices used to access data. Endpoint security involves using antivirus software, intrusion detection systems, and patch management to protect devices from malware, Trojans, and other threats. An unprotected endpoint can serve as an entry point for bad actors seeking to steal or corrupt institutional data.
Equally important is network monitoring, which identifies anomalies in how data flows across systems. Advanced tools like Managed Detection and Response (MDR) solutions can detect unusual activity, such as an endpoint connecting to a server for the first time. Such anomalies may signal unauthorized access attempts, and AI-powered tools are increasingly being used to spot and respond to these risks in real time.
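The "endpoint connecting to a server for the first time" example above boils down to keeping a baseline of seen connections and flagging anything new. Production MDR tools are far more sophisticated, but a minimal sketch of the idea looks like this:

```python
from collections import defaultdict

class FirstConnectionMonitor:
    """Flags the first time an endpoint talks to a given server -- a
    simplified illustration of the anomalies MDR tools surface."""

    def __init__(self) -> None:
        # Baseline: which servers each endpoint has already contacted.
        self.seen: dict[str, set[str]] = defaultdict(set)

    def observe(self, endpoint: str, server: str) -> bool:
        """Record a connection; return True if it is a first-time pairing."""
        is_new = server not in self.seen[endpoint]
        self.seen[endpoint].add(server)
        return is_new

monitor = FirstConnectionMonitor()
print(monitor.observe("lab-pc-42", "payroll-db"))  # True  (anomaly: never seen)
print(monitor.observe("lab-pc-42", "payroll-db"))  # False (now part of baseline)
```

A real deployment would feed flags like these into alerting and correlate them with other signals rather than acting on any single one.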
By combining robust access control policies, endpoint security measures, and vigilant network monitoring, institutions can create a multi-layered defense system that proactively protects their critical data from evolving cybersecurity threats.
Artificial intelligence (AI) is revolutionizing cybersecurity, offering powerful tools for both protection and attack. Its integration into everyday life—from adaptive cruise control in cars to virtual assistants like Alexa—demonstrates its transformative potential. In cybersecurity, AI is used to enhance anomaly detection, monitor network traffic, and identify unusual activity, such as unauthorized access attempts or suspicious patterns. These advancements make AI an indispensable tool for safeguarding sensitive data.
However, the same technology that strengthens defenses is being weaponized by bad actors. Phishing emails, once riddled with typos and poor grammar, have become indistinguishable from legitimate communications thanks to AI-driven text generation. Malicious actors are using AI to refine their strategies, hiding harmful links and crafting convincing messages that trick even the most cautious users.
This dual nature of AI underscores the need for vigilance and proactive measures. Institutions must leverage AI to stay ahead of evolving threats while also remaining aware of how it can be misused. Tools like AI-powered anomaly detection and endpoint monitoring allow organizations to quickly identify and respond to potential breaches, minimizing damage and protecting critical data.
As AI continues to evolve, so do the threats it poses and mitigates. Institutions must remain committed to understanding this technology and deploying it strategically, ensuring it serves as a shield rather than a vulnerability. The key is balancing innovation with caution, recognizing that AI’s benefits must always be paired with thoughtful implementation and rigorous oversight.
No organization is immune to the possibility of a data breach, making preparation critical. Effective breach planning begins with a robust Incident Response Plan (IRP). This plan outlines roles and responsibilities during a suspected breach, ensuring a coordinated and swift response to mitigate damage. Knowing who is accountable for actions, such as shutting down compromised systems or notifying stakeholders, is key to minimizing disruption.
Alongside the IRP, organizations need a Business Continuity Plan (BCP). When a breach occurs, the BCP focuses on restoring access to data for authorized users while blocking unauthorized access. This often involves reconfiguring networks or standing up alternative systems. The shift to remote work during the pandemic, with widespread adoption of Virtual Private Networks (VPNs), has amplified the importance of securing off-campus access points, which can become prime targets for bad actors.
A Continuity of Operations Plan (COOP) adds an additional layer of preparation, addressing non-technical services critical to institutional functioning. This includes maintaining payroll, ensuring public safety, and sustaining financial operations in the wake of a cyber incident.
Compliance with breach notification regulations is another vital component. Different states impose varied requirements regarding whom to notify and how quickly after confirming a breach. For example, in Pennsylvania, recent changes lowered the threshold for reporting breaches from 1,000 impacted individuals to just 500, highlighting the increasing focus on transparency and accountability.
Planning for breaches is not about avoiding them entirely but ensuring the organization is equipped to respond effectively, minimize impact, and maintain trust.
The regulatory landscape surrounding cybersecurity is rapidly changing, with laws like the Gramm-Leach-Bliley Act (GLBA) and the Federal Trade Commission's Safeguards Rule shaping how organizations protect sensitive data. These regulations emphasize a proactive approach to data security and demand compliance with stringent measures to safeguard information.
A key element of the Safeguards Rule is the broad definition of systems covered under these regulations. It applies not only to systems that store sensitive data but also to those connected to such systems. In an increasingly interconnected digital environment, this definition significantly expands the scope of compliance. Institutions must carefully interpret and apply this rule, as ambiguity remains regarding what constitutes "connected systems." Some organizations, like Bob Messner's, choose to define this narrowly by focusing on systems that actively access sensitive data.
Regulations also mandate the appointment of a Qualified Individual to oversee the information security program. For institutions that maintain information on 5,000 or more consumers, this individual must present an annual written security report to the board of trustees, reinforcing accountability at the highest organizational levels.
Another key focus is data retention and destruction policies, requiring organizations to retain sensitive data only for as long as there is a legitimate business need and securely dispose of it afterward. Additionally, vendor management has become crucial, as third-party breaches increasingly contribute to data compromises. Organizations must ensure vendors meet security requirements through measures like SOC 2 Type 2 reports and clear contractual agreements outlining breach responsibilities.
Vendor management has emerged as a critical area in cybersecurity, with third-party risks now accounting for a significant portion of data breaches. Organizations often rely on external vendors to handle or process sensitive data, but these relationships can expose institutions to vulnerabilities outside their immediate control. Bob Messner highlights the importance of robust vendor oversight to mitigate these risks effectively.
One key requirement under the Federal Trade Commission's Safeguards Rule is establishing a third-party vendor management program. This involves annual reviews to assess how vendors safeguard the data entrusted to them. Institutions can use tools like SOC 2 Type 2 reports, which detail a vendor’s security measures, any breaches they’ve experienced, and the safeguards they’ve implemented in response. This transparency allows organizations to make informed decisions about their partnerships.
Another useful tool is the Higher Education Community Vendor Assessment Toolkit (HECVAT), which asks vendors a series of security-related questions. This provides additional assurance about a vendor's cybersecurity practices. Contracts with vendors should also include explicit clauses detailing breach notification timelines, responsibilities in case of a breach, and the specific safeguards to be employed.
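An annual vendor review is essentially a checklist of artifacts on file. The sketch below tracks the three artifact types discussed here; the item names and the vendor are hypothetical placeholders.

```python
# Hypothetical checklist for an annual vendor review; items mirror the
# artifacts discussed above (SOC 2 Type 2 report, HECVAT response,
# breach-notification contract clause).
REQUIRED_ITEMS = {"soc2_type2_report", "hecvat_response", "breach_notification_clause"}

def review_gaps(vendor: str, items_on_file: set[str]) -> set[str]:
    """Return required review artifacts still missing for a vendor."""
    return REQUIRED_ITEMS - items_on_file

# A vendor that has supplied only its SOC 2 Type 2 report:
print(sorted(review_gaps("ExampleVendor", {"soc2_type2_report"})))
# ['breach_notification_clause', 'hecvat_response']
```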
The importance of vendor management became starkly evident in incidents like the 2023 breach involving MOVEit, a third-party file-transfer tool used by the National Student Clearinghouse. Although the breach originated outside any single campus, it still impacted the institutions that relied on the clearinghouse, illustrating the ripple effect of third-party vulnerabilities.
Effective vendor management isn't just about compliance—it’s about protecting sensitive data and maintaining trust in an increasingly interconnected environment.
Data retention and destruction are critical components of cybersecurity, addressing how long data should be stored and how it is safely eliminated when no longer needed. Bob Messner emphasizes that organizations must strike a balance between operational needs and safeguarding sensitive information.
Under the Federal Trade Commission’s Safeguards Rule, organizations are required to establish clear data retention policies. These policies must define how long sensitive information is retained and ensure it aligns with business needs. Messner highlights that this approach mirrors the General Data Protection Regulation (GDPR) in the European Union, which mandates retaining data only as long as it serves a specific purpose. Although U.S. regulations do not define "business need" as strictly, the trend points toward increasing specificity in how organizations handle retention periods.
Equally important is the secure destruction of data once its retention period ends. Sensitive data, if not properly destroyed, poses significant risks, as it could be accessed by unauthorized parties. Methods of destruction should be thorough and compliant with regulatory guidelines, ensuring that sensitive information cannot be reconstructed.
Messner also stresses the evolving nature of data retention requirements. For instance, organizations must adapt their practices to meet legal and operational demands, such as those imposed by FERPA in higher education. The integration of clear and enforceable data retention and destruction policies safeguards institutions against breaches and ensures compliance with growing regulatory expectations.
By implementing robust retention and destruction frameworks, institutions can enhance their cybersecurity posture while fostering trust with stakeholders.
As technology evolves, so do cybersecurity challenges. Institutions must adopt a proactive, multi-layered approach to protect their data and systems. By focusing on robust policies, access controls, vendor management, and compliance, higher education can meet emerging threats head-on. Messner’s insights provide a practical framework for institutions to safeguard their most valuable asset—data—while fostering innovation and resilience.
To learn more about how your institution can navigate the complexities of cybersecurity, get in touch with Doctums experts now.
[Get in Touch]