Lighthouse Blog

Read the latest insights from industry experts on the rapidly evolving legal and technology landscapes, with topics including strategic and technology-driven approaches to eDiscovery, innovation in artificial intelligence and analytics, modern data challenges, and more.

Blog

Self-Service eDiscovery: Who’s Really in Control of Your Data?

Self-service eDiscovery has grown significantly as a topic in the recent past. With data proliferating at an astronomical rate year over year, it makes sense that corporations and firms want increasing control over the eDiscovery process and its cost. A self-service eDiscovery tool is helpful if you want control over your queue as well as your hosted footprint. It is beneficial if your team has the interest and capability to do its own early case assessment (ECA). Additionally, self-service options are useful because they provide insight into specific reporting that you may or may not be currently receiving.

Initially, the self-service model was introduced to serve the part of the market that didn't require robust, traditional full-service eDiscovery for every matter. Tech-savvy corporations and firms with smaller matters were delighted to have the option to do the work themselves. Over time, there have been multiple instances in which a small matter scales unexpectedly and must be dealt with quickly, in an all-hands-on-deck approach, to meet the necessary deadlines. In these instances, it's beneficial to be able to call on a full-service team. When these situations arise, it's critical to have clean handoffs and to ensure a database will transfer well.

Moreover, we have seen major strides in the self-service space regarding data size thresholds. Self-service platforms can now handle multiple terabytes, so self-service is no longer just a "small matter" solution. This gives internal teams incredible leverage and accessibility not previously experienced.

Self-service considerations and recommendations

It's important to understand when a company should utilize a self-service model or solution, so I recommend laying out a protocol. Put a process in place ahead of time so that the next small internal investigation that grows too large too quickly has an action plan that gets to the best solution fast. Before doing this, it's important to understand your team's capabilities. How many people are on your team? What are their roles? Where are their strengths? What is their collective bandwidth? Are you staffed for 24/7 support or second requests?

Next, evaluate which part of the process is most beneficial to outsource. Who do you call for any eDiscovery-related need? Do you have a current service provider? If so, are they doing a good job? Are they giving you a one-size-fits-all solution (small or large), or are they meeting you where you are and acting as a true partner? Are they going the extra mile to customize the process for you? It's important to continually audit service providers.

Think back to past examples. How prepared has your team and/or service provider been in various scenarios? For instance, if an internal investigation is turning into a government investigation, do you want your team pushing the buttons and becoming an expert witness, or do you have a neutral third party to hand that responsibility to?

After the evaluation, memorialize the process in a playbook so that everyone has clear guidelines regardless of which litigator or paralegal is working on the case internally. What could sometimes be a complicated situation can be broken down into simple rules. If you have a current protocol or playbook, ensure your team understands it. Outline the circumstances in which the team would use self-service or full-service so everyone is on the same page.

For more on this topic, check out the interview on the Law & Candor podcast on scaling your eDiscovery program from self-service to full-service.
eDiscovery and Review
Blog

Getting on the Same Page…of the Dictionary

Have you ever had this scenario: multiple team members from different groups come to you frustrated because the working relationship between their groups is "broken"? Legal says they aren't getting what they need, IT says they are providing what's asked, and finance doesn't understand why we are paying our outside vendor for something that the internal IT and legal teams are "supposed to do." You are responsible for process improvement among these groups, so the questions and frustration land on your desk. This is a common issue. So common, in fact, that it was a big part of a recent Legal Operators webinar I attended. The good news is that the solution may be simple.

Oftentimes, the issue revolves around language and how different departments use the same words differently. Let's explore the above scenario a bit further. The legal team member says they asked IT to gather all data from a certain "custodian." The IT team took that to mean all "user-created data" on the network from one particular employee, so that is what they provided. They didn't, however, gather the items on the person's desktop, nor did they gather records that the person created in third-party systems such as the HR and sales systems the company uses. The legal team therefore asked the outside vendor to collect the "missing" data, and that vendor sent a bill for its services. Finance is now wondering why we are paying to collect data when we have an IT team that does that. The issue is that different teams have slightly different interpretations of the same request. Although this scenario is eDiscovery specific, it can happen in any interaction between departments. As legal operations is often responsible for process improvement as well as the way legal functions with other departments, the professionals in that group find themselves trying to navigate the terminology. To prevent such misunderstandings in the future, you can proactively solve this problem with a dictionary.

Creating a dictionary can be really simple. It is something I have seen one person start on their own just by jotting down words they hear from different groups. From there, you can share that document and ask people to add to it. If you already have a dictionary of your company acronyms, you can either add to it or create a specific "data dictionary" for the purposes of legal and IT working together. Another option is to create a simple Word document for a single use at the outset of a project. Which solution you select will vary based on the need you are trying to solve. Here are some considerations when you are building out your dictionary.

What is the goal of the data dictionary? Most commonly, I have seen the goal be to improve the working relationship of specific teams long term. However, you may have a specific project (e.g., creation of a data map or implementation of Microsoft 365) that would benefit from a project-specific dictionary.

Where should it live? This will depend on the goal, but make sure you choose a system that is easy for everyone to access and that doesn't carry a high administrative burden. Choosing a system that the teams already use for other purposes in their daily work will increase the chances of people leveraging the dictionary.

Who will keep it updated? This is ideally a group effort with one accountable person who makes any final decisions on definitions and owns future updates. There will be an initial effort to populate terms, and you may want a committee of two or three people to edit definitions. After this initial effort, you can allow everyone to edit the document, or you can designate representatives from each team. The former keeps the document a living, breathing resource and encourages updating, but may require more frequent oversight by the main administrator. The latter gives each group its own oversight but increases the burden of updating. Whichever method you choose, the ultimate owner of the dictionary should review it quarterly to ensure it stays up to date.

Who will have access? I recommend broader access over more limited access, especially for the main groups involved. The more people understand each other's vocabulary, the easier it is for teams to work together. However, you should consider your company's access policies when making this decision.

What should it include? All department-specific business terms. It is often hard to remember which vernacular in your department is specific to your department, because you are so steeped in that language. One easy way to identify these terms is to assign a "listener" from another department in each cross-functional meeting you have for a period. For example, for the next three weeks, in each meeting that involves another department, ask one person from that other department to write down any words they hear that are not commonly used in their own department. This will give you a good starting point for the dictionary.

Note that, although I am describing a cross-functional effort above, this dictionary can also be leveraged within a department. I have found it very effective to create a legal ops dictionary that includes terms from all the other departments you pick up in your work with those departments. This still helps your goal of resolving confusion and allows you to reach a common understanding quickly, as you are better equipped with the language that will make your ask clear to the other team.
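As a concrete illustration of what a lightweight, shared data dictionary might look like, here is a minimal sketch in Python. The terms, definitions, department names, and owner are hypothetical examples rather than a prescribed schema; the same structure could just as easily live in a spreadsheet or a shared list.

```python
# A minimal, hypothetical data dictionary: each term carries the definition
# each department actually uses, plus an owner responsible for updates.
DATA_DICTIONARY = {
    "custodian": {
        "legal": "A person whose data (email, desktop files, and records in "
                 "third-party systems such as HR or sales tools) may be relevant "
                 "to a matter and must be preserved and collected.",
        "it": "A named user account whose network home directory and mailbox "
              "are gathered on request.",
        "owner": "Legal Operations",
    },
    "collection": {
        "legal": "A defensible copy of all potentially relevant data for a matter.",
        "it": "An export of the data sources explicitly listed in the ticket.",
        "owner": "Legal Operations",
    },
}


def define(term: str, department: str) -> str:
    """Return the definition a given department uses for a term."""
    entry = DATA_DICTIONARY.get(term.lower())
    if entry is None:
        return f"'{term}' is not in the dictionary yet; consider adding it."
    return entry.get(department.lower(), f"No {department}-specific definition recorded.")


if __name__ == "__main__":
    # Surfacing the gap between these two definitions is exactly the
    # misunderstanding described in the custodian scenario above.
    print(define("custodian", "legal"))
    print(define("custodian", "IT"))
```

Seeing the legal and IT definitions side by side is the whole point: the value is not the tooling but the shared, reviewable record of how each group uses the same word.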
Legal Operations
Blog

Achieving Information Governance through a Transformative Cloud Migration

Recently, I had the pleasure of appearing as a guest on Season 5, Episode 1 of the Law & Candor podcast, hosted by Lighthouse's Rob Hellewell and Bill Mariano. The three of us discussed cloud migrations and how that process can provide a real opportunity for an organization to transform its approach to information governance. Below is a summary of our conversation, including best practices for organizations that are ready to take on this digital and cultural cloud transformation.

Because it is difficult to wrap your head around the idea of a cloud transformation, it can be helpful to visualize the individual processes involved on a much smaller scale. Imagine you are simply preparing to upgrade to a new computer. Over the years, you have developed bad habits around how you store data on your old computer, in part because the tools on that computer have become outdated. Now that you're upgrading, you have the opportunity to evaluate your old stored data to identify what is worth moving to your new computer. You also have the opportunity to re-evaluate your data storage practice as a whole and come up with a more efficient plan that utilizes the advanced tools on your new computer. Similarly, the cloud migration process is the best opportunity an organization has to reassess what data should be migrated, how employees interact with that data, and how that data flows through the organization before building a brand new paradigm in the Cloud.

You can think of this new paradigm as the organization's information architecture. Just as a physical architect designs a physical space for things, an organization's information architecture is the infrastructure in which the organization's data will reside. To create this architecture effectively, you first must analyze how data flows throughout the company. To visualize this process, imagine the flow of information as a content pipeline: you've got a pile of papers and files on your desk that you want to assess, retaining what is useful to you and passing the rest on to the next person down the pipe. First, you would identify the files you no longer need and discard those. Next, you would identify the files you need for your own work and set those aside. Then you would pass the remaining pile down to the next person in the pipeline, who has a different role in the organization (say, an accountant). The accountant pulls out the files that are relevant to their accounting work and passes the rest down to the next person (say, a lawyer). The lawyer performs the same exercise for files relevant to their legal role, and so on until every file has a "home."

In this way, information architecture is about clearly defining roles (accounting, legal, etc.) and how those roles interact with data, so that there is a place in the pipeline for the data they use. This allows information to flow down the pipeline and end up where it belongs. Note how different this system is from the old information governance model, where organizations would try to classify information by what it was in order to determine where it should be stored. In this new paradigm, we try to classify information by how it is used, because the same piece of content can be used in multiple ways (a vendor contract, for example, can be useful to both legal and accounting roles). The trick to structuring this new architecture is to place data where it is most useful.

Going hand-in-hand with the creation of a new information architecture, cloud migrations can (and should) also be an opportunity for a business culture transformation. Employees may have to re-wire themselves to work within this new digital environment and change the way they interact with data. This cultural transformation can be kicked off by gathering all the key players together and having a conversation about how each currently interacts with data. I often recommend conducting a multi-day workshop where every stakeholder shares what data they use, how they use it, and how they store it. For example, an accountant may explain that when he works on a vendor contract, he pulls the financial information from it and saves it under a different title in a specific location. A lawyer then may explain that when she works on the same vendor contract, she reviews and edits the contract language and saves it under a different title in a different location. This collaborative conversation is necessary because, without it, no one in the organization would be able to see the full picture of how information moves through the organization. Equally important, what emerges from this kind of workshop are the seeds of culture transformation: a greater awareness from every individual of the role they play in the overall flow of information throughout the company and of the importance of that role in the information governance of the organization.

Best practices for organizations:

Involve someone from every relevant role in the organization in the transformation process (i.e., everyone who interacts with data). If you involve frontline workers, the entire organization can embrace the idea that the cloud migration process will be a complete business culture transformation.

Once all key players are involved, begin the conversation about how each role interacts with data. This step is key not only for the business culture transformation, but also for the organization to understand the importance of doing the architecture work.

These best practices can help organizations leverage their cloud migration process to achieve an efficient and effective information governance program. To discuss this topic further, please feel free to reach out to me at JHolliday@lighthouseglobal.com.
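To make the "content pipeline" idea above a bit more concrete, here is a small, purely illustrative Python sketch that routes content by how each role uses it rather than by what it is. The role names, usage tags, and routing rules are hypothetical and not tied to any particular platform; a real information architecture would be expressed in your content management and governance tooling, not in code like this.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    name: str
    tags: set[str] = field(default_factory=set)  # how the content is used, not what it is


# Each role keeps what it uses and passes everything else along the pipeline.
# A single document (e.g., a vendor contract) can land in more than one "home".
PIPELINE = [
    ("accounting", {"invoice", "vendor-contract", "budget"}),
    ("legal",      {"vendor-contract", "nda", "policy"}),
    ("hr",         {"policy", "personnel-record"}),
]


def route(documents: list[Document]) -> dict[str, list[str]]:
    """Assign each document a home with every role that uses it."""
    homes: dict[str, list[str]] = {role: [] for role, _ in PIPELINE}
    homes["dispose-or-archive"] = []
    for doc in documents:
        placed = False
        for role, uses in PIPELINE:
            if doc.tags & uses:  # this role uses the document, so it gets a home here
                homes[role].append(doc.name)
                placed = True
        if not placed:
            homes["dispose-or-archive"].append(doc.name)
    return homes


if __name__ == "__main__":
    docs = [
        Document("acme-vendor-contract.docx", {"vendor-contract"}),
        Document("old-holiday-photos.zip", set()),
    ]
    print(route(docs))
```

The design point the sketch tries to capture is the one made above: classification keys off usage by role, so the vendor contract ends up in both the accounting and legal homes, while content no role uses falls out of the pipeline for disposition.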
Information Governance
Blog

Trends Analysis: New Sources of Evidentiary Data in Employment Disputes

Below is a copy of a featured article written by Denisa Luchian for The Lawyer.com that features Lighthouse's John Shaw. It highlights the challenges arising from the increased use of collaboration and messaging tools by employees in remote-work environments.

Our "top trends" series was born out of a desire to help in-house lawyers with their horizon scanning and with assessing the potential risks heading their way. Each post focuses on a specific area, providing companies and their lawyers with quick summaries of some of the challenges ahead.

Our latest piece in the series looks at the top 3 trends in-house lawyers should take notice of in the area of employment disputes, and was carefully curated by one of our experts, Lighthouse director of business development John Shaw.

The Covid-19 pandemic has affected every sector of law and litigation, and employment law is certainly no exception. From navigating an ever-changing web of COVID-19 compensation regulations to ensuring workplaces are compliant with shifting government health guidelines, the last six months have been chaotic for most employers. But as we all begin to regain our footing in this "new normal", there is another COVID-19-related challenge that employers should be wary of: the increased use of collaboration and messaging tools by employees in remote-work environments.

This past spring, cloud-based collaboration tools like Slack and Microsoft's Teams reported record levels of utilisation as companies around the world were forced to jettison physical offices to keep employees safe and comply with government advice. Collaboration tools can be critical assets for keeping businesses running in a remote work environment, but employers should be aware of the risks and challenges the data generated from these sources can pose from an employment and compliance perspective.

Intermingling of personal and work-related data over chat

As most everyone has noticed by now, working remotely during a pandemic can blur the line between "work life" and "home life." Employees may be replying to work chat messages on their phone while simultaneously supervising their child's remote classroom, or participating in a video conference while their dog chases the postman in the background. Collaboration and chat messaging tools can blur this line even further. Use of chat messaging tools is at an all-time high as employees who lost the ability to catch up with co-workers at the office coffee station move these kinds of casual conversations to work-based messaging tools. These tools also make it easy for employees to casually share non-work-related pictures, gifs, and memes with co-workers directly from their mobile phones.

The blurring line between home and work, as well as the increased use of work chat messaging, can also lead to the adoption of more casual written language among employees. Most chat and collaboration tools have emojis built into their functionality, which only furthers this tendency. Without the benefit of facial expressions and social cues, interpretation of this more casual written communication style can vary greatly depending on age, context, or culture. All of this means that personal, non-work-related conversations with a higher potential for misinterpretation or dispute are now being generated over employer-sanctioned tools and possibly retained by the company for years, becoming part of the company's digital footprint.

Evidence gathering challenges

Employers should expect that much of the data and evidence needed in future employment disputes and investigations may originate from these new types of data sources. Searching for and collecting data from cloud-based collaboration tools can be a more complicated process than traditional searching of an employee's email or laptop. Moreover, the actual evidence employers will be searching for may look different when coming from these data sources and require additional steps to make it reviewable. Rather than using search terms to examine an employee's email for evidence of bad intent, employers may now be examining the employee's emoji use or reactions to chat comments on Teams or Slack.

Evidence for wage and hour disputes may also look a bit different in a completely remote environment. When employees report to a physical office, employers can traditionally look to data from building security or log-in/out times from office-based systems to verify the hours an employee worked. In a remote environment, gathering this type of evidence may be more complex and involve collecting audit logs and data from a variety of different platforms and systems, including collaboration and chat tools. A company's IT team or eDiscovery vendor will need to understand the underlying architecture of these tools and ensure they have the capacity to search, collect, and understand the data generated from them.

Employer best practices

Employers should consider implementing an employee policy around the use of collaboration tools and chat functionality, as well as a comprehensive data retention schedule that accounts for the data generated from these tools. Keep these plans updated and adjust as needed. Ensure IT teams or vendors know where the data employees generate in these new data sources is stored, and that they have the ability to access, search, and collect that data in the event of an employment dispute.
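As a rough illustration of the kind of wage-and-hour analysis described above, the sketch below reconstructs approximate daily activity windows from a hypothetical export of audit-log events (sign-ins, chat messages, file edits). The CSV file name and column names are assumptions made for the example; real collaboration platforms expose audit data through their own admin interfaces and formats, and any real collection should be done defensibly with appropriate tooling.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical export format: one row per logged event with a user, an ISO
# timestamp, and the source system (e.g., chat, mail, document edits).


def daily_activity_window(path: str) -> dict[tuple[str, str], tuple[str, str]]:
    """Return (first event, last event) per user per calendar day."""
    windows: dict[tuple[str, str], list[datetime]] = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp_utc"])
            windows[(row["user"], ts.date().isoformat())].append(ts)
    return {
        key: (min(times).strftime("%H:%M"), max(times).strftime("%H:%M"))
        for key, times in windows.items()
    }


if __name__ == "__main__":
    # Example input: audit_export.csv with columns user, timestamp_utc, source.
    for (user, day), (first, last) in daily_activity_window("audit_export.csv").items():
        print(f"{user} {day}: first activity {first}, last activity {last}")
```

Even a simple first/last-event summary like this makes the point in the article: the evidence now lives across many systems, and whoever gathers it needs to understand each source well enough to normalize timestamps and account for gaps.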
Chat and Collaboration Data
Microsoft 365
Blog

Worldwide Data Privacy Update

It was a tumultuous summer in the world of data privacy, so I wanted to keep legal and compliance teams updated on changes that may affect your business in the coming months. Below is a recap of important data privacy changes across multiple jurisdictions, as well as where to go to dive into these updates a little deeper. Keep in mind that some of these changes may mean heightened responsibilities for companies related to breach requirements and/or data subject rights.

U.S.

On September 17th, four U.S. Republican senators introduced the "Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act" (SAFE DATA). The Act is intended to provide Americans "with more choice and control over their data and direct businesses to be more transparent and accountable for their data practices." The Act contains data privacy elements reminiscent of the GDPR and the California Consumer Privacy Act (CCPA) of 2018, including requiring tech companies to provide users with notice of privacy policies, giving consumers the ability to opt in and out of the collection of personal information, and requiring businesses to allow consumers to access, correct, or delete their personal data. See the press release issued by the U.S. Senate Committee on Commerce, Science and Transportation here: https://www.commerce.senate.gov/2020/9/wicker-thune-fischer-blackburn-introduce-consumer-data-privacy-legislation

California's Proposition 24 (the "California Privacy Rights Act of 2020") will be on the state ballot this November. In some ways, the Act expands upon the CCPA by creating a California Privacy Protection Agency and tripling fines for collecting and selling children's private information. Proponents say it will enhance data privacy rights for California citizens and give them more control over their own data. Opponents are concerned that it will result in a "pay for privacy" scheme, where large corporations can downgrade services unless consumers pay a fee to protect their own personal data. See https://www.sos.ca.gov/elections/ballot-measures/qualified-ballot-measures for access to the proposed Act.

In mid-August, the Virginia Legislative Commission initiated study commissions to begin evaluating elements of the proposed Virginia Privacy Act, which would impose similar data privacy responsibilities on companies operating within Virginia as the GDPR does for those in Europe and the CCPA does for those in California. To access the proposed Act, see: https://lis.virginia.gov/cgi-bin/legp604.exe?201+sum+HB473

Europe

On September 8, Switzerland's Federal Data Protection and Information Commissioner (FDPIC) concluded that the Swiss-US Privacy Shield does not provide an adequate level of protection for data transfers from Switzerland to the US. The statement came via a position paper issued after the Commissioner's annual assessment of the Swiss-US Privacy Shield regime, and was based on the Court of Justice of the European Union (CJEU) invalidation of the EU-US Privacy Shield. You can find more about the FDPIC position paper here: https://www.edoeb.admin.ch/edoeb/de/home/kurzmeldungen/nsb_mm.msg-id-80318.html

Similarly, Ireland's data protection commissioner issued a preliminary order to Facebook to stop data transfers from EU users to the U.S., based on the CJEU's language in the Schrems II decision, which invalidated the EU-US Privacy Shield. In response, Facebook has threatened to halt Facebook and Instagram services in the EU. Check out the Wall Street Journal's reporting on the preliminary order issued by the Ireland Data Protection Commission here: https://www.wsj.com/articles/ireland-to-order-facebook-to-stop-sending-user-data-to-u-s-11599671980. For Facebook's response filing in Ireland, see: https://www.dropbox.com/s/yngcdv99irbm5sr/Facebook%20DPC%20filing%20Sept%202020-rotated.pdf?dl=0

Relatedly, in the wake of the Schrems II judgment, the European Data Protection Board has created a task force to look into 101 complaints filed against data controllers in several EEA member states related to Google/Facebook transfers of personal data to the United States. See the EDPB's statement here: https://edpb.europa.eu/news/news/2020/european-data-protection-board-thirty-seventh-plenary-session-guidelines-controller_en

Brazil

In September, the new Brazilian General Data Protection Law (Lei Geral de Proteção de Dados Pessoais, or LGPD) became retroactively effective after the end of a 15-business-day period imposed by the Brazilian Constitution. This was a surprising turn of events after the Brazilian Senate rejected a temporary provisional measure on August 26th that would have delayed the effective date to the summer of 2021. Companies should be aware that the law is similar to the GDPR in that it is extraterritorial and bestows enhanced privacy rights on individuals (including the right to access and the right to know). Be aware, too, that although administrative enforcement will not begin until August of 2021, Brazilian citizens now have a private right of action against organizations that violate data subjects' privacy rights under the new law. For more information, check out the LGPD site (which can be translated via Google Chrome) with helpful guides and tips, as well as links to the original law: https://www.lgpdbrasil.com.br/. The National Law Review also has a good overview of the sequence of events that led up to this change: https://www.natlawreview.com/article/brazil-s-data-protection-law-will-be-effective-after-all-enforcement-provisions

Egypt

In June, Egypt passed the Egyptian Data Protection Law (DPL), the first law of its kind in that country, which aims to protect the personal data of Egyptian citizens and of EU citizens in Egypt. The law prohibits businesses from collecting, processing, or disclosing personal information without permission from the data subject. It also prohibits the transfer of personal data to a foreign country without a license from Egypt. See the International Association of Privacy Professionals' reporting on the law here: https://iapp.org/news/a/egypt-passes-first-data-protection-law/

To discuss this topic further, please feel free to reach out to me at SMoran@lighthouseglobal.com.
Data Privacy
Blog

Cloud-Based Collaboration Tools Are Not Just Desirable, but Necessary for Keeping Workforces Productive

Below is a copy of a featured article written by Denisa Luchian for The Lawyer.com, in which she interviews Lighthouse's Matt Bicknell. Lighthouse business development director EMEA Matt Bicknell talks to The Lawyer about how, in today's remote environment, cloud-based collaboration tools are not just desirable but a necessity, and about the challenges they pose for eDiscovery processes.

What is the driving force behind the massive migration to cloud-based environments over the last few years?

There are a few factors at play here. Prior to the Covid-19 pandemic, companies were already moving their data to the Cloud (both public and private) in droves, in order to take advantage of unlimited data capacities and drastically lower IT overhead. The move to the Cloud is also being driven by a younger workforce that feels at home working with cloud-based chat and collaboration tools, like M365 or G-Suite. However, the worldwide shift to remote work due to the pandemic really broke the dam when it comes to cloud migration. We've seen a seismic shift to cloud-based tools and environments since March of 2020. In a completely remote environment, cloud-based collaboration tools are not just desirable, they are necessary to keep workforces productive. Migrating to the Cloud can greatly reduce the need for workers to be physically present in an office building.

What are some of the challenges that cloud migration can pose to the eDiscovery process?

Unlimited storage capacity at low cost can be a great thing for an organisation's bottom line, but it can definitely cause issues when it comes time to find and collect data needed for a litigation or investigation. Search functions built into cloud-based tools are often designed for business use rather than for the functionality legal and compliance teams require to find relevant information. In addition, collecting and producing from collaboration tools like Teams or Slack can be much more complicated than a traditional email collection. Relevant communications that previously would have happened over email now happen over chat, through emoticon reactions, or through collaboratively editing a document. All of this relevant data may be stored in several different places, in a variety of formats, within the Cloud. Even attachments are handled differently in cloud-based applications: instead of sending a static document as an attachment via email, Teams defaults to sending a link to the document. This means the document could look significantly different at the time of collection than it did when the link was sent. Collecting from those types of sources, producing them in a format that makes sense to a reviewer or opposing counsel, and accounting for all the dynamic variables can be a difficult hurdle to overcome if the organisation hasn't planned for it.

How can companies prepare for eDiscovery challenges in a cloud environment?

First, make sure compliance, legal, and IT all have a seat at the table and have input into decisions that may affect their workflows and processes. Understand where your data resides and have effective retention, data governance, and compliance policies in place. Your policies should spell out which cloud-based applications employees may use and also set rules for how they can be used and where work product should be stored. Understand your legal hold policy and what type of data it encompasses. Make sure you have the right talent (either within your organisation or through a vendor) who understands the underlying architecture behind Teams, G-Suite, or any other cloud-based tool your organisation uses and also knows how to collect relevant information when needed. Ensure that your IT team or vendor has a system in place to monitor application and system updates. Cloud-based updates can roll out on a weekly basis; those changes may significantly impact the efficacy of your data retention and collection policies and workflows.

As cloud technology continues to evolve, what does the future hold for eDiscovery?

Because of the near-endless storage capacity of the Cloud, the amount of data companies generate will only continue to expand exponentially. As a result, the technology behind AI and analytics will continue to improve, and those tools will eventually be less of an option for certain matters and more of a necessity for most matters. I also think that as more companies feel comfortable moving their data to the Cloud, we will start to see more and more of these companies bring their eDiscovery programs in house. Vendors are already beginning to offer subscription-based, self-service eDiscovery programs that hand the eDiscovery reins over to the organisation while the vendor stores and manages the data in the Cloud (both public and private). This type of service allows companies to eliminate the middleman, control their own eDiscovery costs, and easily scale up or down to meet their own needs, while leaving the burden of data storage security and maintenance with the vendor. Finally, look for vendors to start offering subscription-based services to help organisations manage the near-constant stream of application and system updates for cloud-based services.
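To illustrate one of the collection wrinkles discussed above, here is a small, hypothetical sketch that scans a chat export for link-style attachments so they can be flagged for separate snapshot collection. The JSON shape, field names, and file name are invented for the example; actual Teams or Slack exports have their own structures and should be handled with tooling built for them.

```python
import json
import re

# Hypothetical export: a list of messages, each with an author, a timestamp,
# text, and optional attachments that may be links rather than static files.
LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)


def flag_linked_documents(export_path: str) -> list[dict]:
    """Return messages whose attachments or body contain document links.

    Linked documents can change after the message was sent, so they may need
    to be collected (and versioned) separately from the chat itself.
    """
    with open(export_path) as f:
        messages = json.load(f)

    flagged = []
    for msg in messages:
        links = [a["url"] for a in msg.get("attachments", []) if "url" in a]
        links += LINK_PATTERN.findall(msg.get("text", ""))
        if links:
            flagged.append({"author": msg["author"], "sent": msg["sent"], "links": links})
    return flagged


if __name__ == "__main__":
    for item in flag_linked_documents("chat_export.json"):
        print(item["sent"], item["author"], item["links"])
```

The takeaway mirrors the interview: because a linked document is dynamic, flagging the link is only the first step; the underlying document still has to be captured in a defensible, reviewable form.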
Microsoft 365
Chat and Collaboration Data
Information Governance
Blog

Automation of In-House Legal Tasks: How and Where to Begin

Legal operations departments aim to support the delivery of legal services in an efficient manner. To that end, resource management and solving problems through technology are core responsibilities of the department. But the tasks of a legal department vary widely, from answering legal phone calls and filing patents to reviewing and approving contracts and litigating, just to name a few. With such a varied workload, what to automate can be difficult to identify. To help, I have put together a brief overview of where to start.

Step 1: Identification

Start by identifying the tasks that are repetitive. One of the best ways I have found to do this is to set up a quick 15-minute discussion with 3-5 representatives from different functional areas of your legal team, and from different levels (e.g., individual contributor, manager, function head). In that meeting, ask them one or all of the following questions: What tasks do you wish your team no longer had to do? What tasks do you want to be replaced by robots in the future? What tasks are low value but still take a lot of your team's time?

You should not spend too much time here; the goal is to identify a quick list that is top of mind for people. From these interviews, create a list for further vetting. In case you come up empty-handed or aren't able to get time with people within legal, here is a list of items that are commonly automated and that we would expect to come up:

Contract automation: self-service retrieval of boilerplate contracts (e.g., NDAs); self-service building of common contracts (e.g., clause selection for vendor contracts, developer agreements); requests for review, negotiation, and signature of other contracts.

Legal team approvals: marketing document approvals; budget approval for any legal team spend.

Legal assistance requests (intake): legal research requests; legal advice needed on an issue; the need for outside counsel.

Patent management: alerts for filing and renewal deadlines; automatically managed workflows for submissions.

Select one or two items from your list and then validate them with your boss and/or general counsel. You want to understand whether others agree on the impact automation will make and identify any potential concerns.

Step 2: Build vs. Buy

Whether to purchase third-party software or build your own tool internally is always a good question to start with. Building your own tool gives you exactly what you want with, oftentimes, very little need to change your process. But it is more resource-intensive, both for the build and for the maintenance. Buying off-the-shelf software limits you to what's commercially available, but it takes the load off your development resources.

For some, build or buy may be an easy question, as they may not have access to development resources. Others may not have any budget for an external tool and/or may be required to use internal teams. Most, however, fall in the middle and have some access to resources and some budget (but usually not enough of either; that's a whole other topic).

If you fall into this latter category, you will have to analyze your options. Your organizational culture will dictate what depth of analysis is needed. Regardless of the level of detail, the process is the same. The easiest place to start is by surveying what is commercially available. Even if you decide to build, knowing what software is out there, what features are available, and the general costs is helpful. Next, it is helpful to get an approximate cost of the build and maintenance if done internally. This can be a rough order of magnitude based on estimates from other internal tools developed, or it can be a more detailed estimate developed with the engineering team. Once you have the costs, you will want to add some information about the pros and cons of each solution, e.g., time to build and implement, technology dependencies (if known), and other considerations (e.g., we are moving to the cloud in 6 months and we don't know the impact). Once you have this analysis, you can put forth a recommendation to your boss and whomever else is required to decide how to proceed.

Step 3: Design

Now that you have a decision, you can move on to design. This is the most critical stage, as this is where you determine exactly what results your automation will produce. The first thing to do here is to map out your current internal process, including who does what. Make sure a representative of each group takes a look at the process diagram and validates it.

Once you have the process in place, you're ready to work with the development team. If you are buying a solution, you should work closely with the software provider's onboarding team to overlay your current process with the capabilities of the software. You will want to note where the software does not support your process and where changes will need to be made. If you adjust your process, be sure to involve the same representatives who helped with the initial diagram to provide feedback on any proposed changes.

If you are building the solution, you will meet with your internal product resource. This person (or people) will want to understand the process diagram and may even want to watch people go through the process so they can understand user behavior. They will then likely convert your diagram into user stories that developers will build against. Be as specific as possible in this process. This resource will represent your voice with the developers, so you want them to really understand the nuances of the process. Expect some iteration back and forth during this stage; although I have simplified it here, this will be a long stage and the most important one.

Step 4: Implementation

The final stage of the process is implementation. Start with a pilot of the automation: either select a small use case or a small group of users and validate that your automation functions as planned. During this pilot, it is really helpful to have resources from your software provider or from the development team readily available to make changes and help answer questions. You should also keep track of how the automation is performing versus your expectations. For example, if you expected it to save time, create a way to track the time it saves and report on that metric.

After a successful pilot and the necessary refinement, you can move on to your full rollout. Create a plan that includes deployment of the technology, training, feedback, and adjustment. Make sure to also identify a longer-term maintenance strategy that includes continuing to gather feedback and ways to improve the automation over time.

There are lots of great publications that go into further detail about each of the steps above, but hopefully this points you in the right direction.
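As a simple illustration of the pilot measurement idea in Step 4, the sketch below compares a baseline manual handling time against logged handling times for a hypothetical automated task (e.g., self-service NDA retrieval). The baseline figure, task name, and log format are assumptions made for the example, not benchmarks; the point is only to show how easy it is to report a concrete "time saved" metric during a pilot.

```python
from statistics import mean

# Assumed average minutes for the old manual process (replace with your own baseline).
BASELINE_MINUTES_PER_REQUEST = 45

# Hypothetical pilot log: minutes spent per request once the automation is live.
pilot_log = [
    {"task": "nda-self-service", "minutes": 6},
    {"task": "nda-self-service", "minutes": 9},
    {"task": "nda-self-service", "minutes": 5},
]


def pilot_summary(log: list[dict], baseline: float) -> dict:
    """Report average handling time and estimated time saved during the pilot."""
    avg = mean(entry["minutes"] for entry in log)
    return {
        "requests": len(log),
        "avg_minutes": round(avg, 1),
        "est_minutes_saved": round((baseline - avg) * len(log), 1),
    }


if __name__ == "__main__":
    print(pilot_summary(pilot_log, BASELINE_MINUTES_PER_REQUEST))
```

Even a rough report like this gives the rollout decision something concrete to stand on, and the same log can feed the longer-term maintenance reviews mentioned above.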
Once deployed, automation can be a very powerful tool that augments your team without adding additional FTEs. To discuss this topic more, please feel free to reach out to me at DJones@lighthouseglobal.com.
Legal Operations
AI and Analytics
Blog

Now Live! Season Five of Law & Candor

We are thrilled to announce the one-year anniversary of our Law & Candor podcast. One year, five seasons, and 30 episodes later, we are still here and wholly devoted to pursuing the legal technology revolution. Listen to season five now or scroll down for more details.

Co-hosts Bill Mariano and Rob Hellewell are back for season five of Law & Candor with six easily digestible episodes that cover a range of hot topics, from cloud migrations to managing DSARs. This dynamic duo, alongside industry experts, discusses the latest topics and trends within the eDiscovery, compliance, and information governance space, and shares key tips for you and your team to take away. Check out the latest season's line-up below:

Achieving Information Governance Through a Transformative Cloud Migration
Scaling Your eDiscovery Program: Self Service to Full Service
Leveraging AI and Analytics to Detect Privilege
Effective Strategies for Managing DSARs
Facilitating a Smooth and Successful Large Review Project with Advanced Analytics
Top Microsoft 365 Features to Leverage in Your eDiscovery Program

Episodes are created to be short and bingeable so that you can listen on the platform of your choice with ease. Check them out now or bookmark them to listen to later. Follow Law & Candor on Twitter to get the latest updates and join the conversation.

Catch up on past seasons: Season 1, Season 2, Season 3, Season 4, and the Special Edition: Impacts of COVID-19.

For questions regarding this podcast and its content, please reach out to us at info@lighthouseglobal.com.
eDiscovery and Review