Workplace Surveillance Ethics

Explore top LinkedIn content from expert professionals.

  • View profile for Clara Hawking
    32,225 followers

    Microsoft Teams will soon tell your boss where you are. Starting December 2025, Teams can automatically detect when you connect to your company’s Wi-Fi and update your location to “in the office.”

    It sounds like a small feature. It isn’t. Location tracking through workplace networks is the newest frontier in digital surveillance, and it’s arriving through your collaboration software.

    Microsoft says the feature is opt-in, which is welcome. But that decision will rest largely with employers and admins, not with the average employee trying to meet deadlines. If you work for a Microsoft-using organization, now is the time to ask: Is our company planning to activate this feature? Has consent been properly documented? If you represent a union, this deserves to be on your next agenda.

    The GDPR and the UK Data Protection Act require transparency, necessity, and proportionality for any location tracking. Under the EU AI Act, AI systems used for workplace management may also qualify as high-risk, and employers may need to conduct a fundamental rights impact assessment before rolling such a feature out.

    This isn’t paranoia. It is risk management, employee rights, and compliance. Workplace tracking without explicit, informed consent can violate privacy law in multiple jurisdictions, and it may expose employers to liability under both the GDPR and the EU AI Act’s risk provisions. If your organization uses Microsoft Teams with minors, such as in schools or training programs, the stakes are even higher.

    Here’s what to do as an employee, parent, or guardian:
    🔹 Ask your IT administrator whether “location autodetection” is enabled.
    🔹 Request a copy of the company’s Data Protection Impact Assessment (DPIA).
    🔹 Ensure opt-in consent is voluntary and revocable.
    🔹 Check that logs are deleted regularly and not used for performance evaluation.

    Transparency is not optional.
    #DigitalSovereignty #WorkplacePrivacy #AICompliance #GDPR #MicrosoftTeams

    Image source: SlashGear, https://lnkd.in/di5WvY2e
    From Microsoft:
    Microsoft 365 Roadmap: https://lnkd.in/dYc3N9TX
    Microsoft Learn (Configure auto-detect of work location): https://lnkd.in/dtEkYNqB
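    For admins, or for employees who want to give their IT team a concrete question, the first checklist item above can be verified from the Teams PowerShell module. The following is a minimal sketch, assuming the tenant-wide work location detection policy described in the Microsoft Learn link; cmdlet availability and behavior depend on the module version, and even with the policy enabled, users reportedly still opt in on their own devices.

```powershell
# Requires the MicrosoftTeams PowerShell module and a Teams admin account.
Connect-MicrosoftTeams

# Check whether work location detection is enabled at the tenant level.
Get-CsTeamsWorkLocationDetectionPolicy -Identity Global

# Admins can disable the feature tenant-wide by clearing the flag.
Set-CsTeamsWorkLocationDetectionPolicy -Identity Global -EnableWorkLocationDetection $false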

  • View profile for Vinu Varghese

    MS Organizational Psychology | Chartered MCIPD | GPHR® | SHRM-SCP® | Lean Six Sigma Green Belt

    7,672 followers

    The Surveillance Trap: monitoring boosts visibility, erodes trust.

    Over the past few months, more companies have quietly rolled out new monitoring systems: tracking mouse movements, keystrokes, websites, “idle time,” and even screenshots.

    The intent? Improve productivity, tighten accountability, optimise workflows.
    The outcome? A workplace culture that feels more watched than supported.

    Here’s the paradox leaders are missing:

    Monitoring boosts visibility, not trust. Employees may be online longer, but they’re not necessarily more engaged. Surveillance signals a lack of confidence, and people respond by doing only what gets measured.

    Tracking activity does not necessarily mean tracking impact. A green dot on Teams does not equal performance. When companies measure time-at-keyboard more than outcomes, employees shift from value creation to “visibility theatre.”

    The emotional cost is real. Workers report:
    • feeling micromanaged
    • reduced autonomy
    • lower morale
    • rising anxiety and distrust

    Ironically, the very tools meant to improve productivity may be undermining it. Modern work isn’t defined by minutes of activity; it’s defined by:
    • problem-solving
    • creativity
    • judgment
    • ownership
    • outcomes

    These can’t be captured by keystroke logs.

    The companies that will win aren’t the ones tracking employees. They’re the ones empowering them.

  • View profile for Roberto Ferraro
    Roberto Ferraro is an Influencer

    Grow and learn with me: personal development, leadership, innovation. I am a project leader, coach, and visual creator, and I share all I learn through my posts and newsletter.

    110,194 followers

    The dark side of employee monitoring: trust, value, and agency 🕵🏻♂️🚫

    🤔 A study found that 80 percent of top US employers use tech to track workers' productivity, often in real time. Does our company monitor our fellow workers and us with high-tech software? Do we even know?

    ➡️ The missed side of value
    Employee monitoring encourages the mentality that the only valuable hours are those we spend in front of our computers; instead, we need to reframe what productivity is.

    ➡️ A trust issue
    "If we can't see our people, how do we know what they're doing?" Digital monitoring is an extreme form of micromanagement, a need for control born of a lack of trust that people are "productive" when they are not in the office.

    ➡️ Monitoring can backfire
    Research suggests that employee monitoring can backfire, making people feel like they have no agency and increasing the prevalence of the very behaviors these systems are meant to deter.

    ➡️ Rethinking knowledge work and value
    People may work hard to prove they are working instead of doing valuable work, constantly demonstrating effort rather than creating value.

    🌱 So, how can we create cultures where people are trusted to manage their time and produce quality work?

    ➡️ The potential of people analytics
    If we can solve the trust and transparency issues, people analytics could help employees use their own data to better understand and improve their work patterns.

    Illustration by me 😊 Extract from an article by Rachel Botsman. Link to the complete source in the first comment 👇

    #productivity #trust #management

  • View profile for Paula Cipierre
    Paula Cipierre is an Influencer

    Global Head of Privacy | LL.M. IT Law | Certified Privacy (CIPP/E) and AI Governance Professional (AIGP)

    9,198 followers

    Can law help build ethical AI systems by design, or does ethics resist formalization?

    In earlier posts, I argued that ethics is about reasoned judgement under uncertainty, and that regulation can create clarity where organizations otherwise struggle. With today’s post I want to connect law and ethics to technical implementation; specifically, the role that law can play in facilitating ethical data practices by design.

    Privacy professionals are familiar with this concept, as epitomized by Art. 25 GDPR, which requires organizations to implement data protection by design and by default. But as Prof. Christian Djeffal outlines in a recent article, law by design has since become a fixture of EU law: it translates legal and ethical goals into technical and organizational obligations while deliberately leaving discretion as to implementation.

    ➡️ What law can do well
    Frameworks like the GDPR and the AI Act show how law can meaningfully support ethical data practices by design:
    ✅ They shape how organizations structure the lifecycle of data processing, starting with an initial assessment of the necessity and proportionality of processing.
    ✅ They require organizations to clearly define roles and responsibilities from the beginning, and to document any relevant risks.
    ✅ They encourage organizations to seek diverse perspectives when developing and deploying new technologies, reflecting the inherently interdisciplinary nature of sociotechnical design.

    ➡️ What this means for ethical AI
    Ethics is no longer a nice-to-have when it is hardcoded into legal requirements. As I argued in my master's thesis, the AI Act, for instance, translates ethical obligations into technical requirements, specifically mandating:
    ✅ Respect for human autonomy, by requiring human oversight of the development and deployment of AI systems.
    ✅ The prevention of harm, through accuracy, robustness, and security.
    ✅ Fairness and explainability, through robust data governance and record-keeping.

    ➡️ Where law reaches its limits
    At the same time, law by design does not resolve dilemmas or trade-offs. Ethical behavior is not a technological fact but the result of human deliberation. Procedure matters just as much as outcome, and legal requirements alone do not tell organizations how to weigh competing priorities in practice.

    ➡️ What this means for leaders on ethical AI
    Law by design is not a shortcut to ethical AI, but it can create the right incentives. Leaders should:
    ✅ Leverage law-by-design requirements as a foundation for responsible data processing.
    ✅ Facilitate ethical deliberation to translate law-by-design requirements into concrete deliverables.
    ✅ Open up room for innovation by, in Djeffal's words, "prompting the development of solutions where none yet exist."

    Link to Djeffal's article: https://bit.ly/45Sj76P.

    #ResponsibleAI #AIGovernance #DataEthics #Leadership

  • View profile for John Hopkins, PhD
    John Hopkins, PhD is an Influencer

    LinkedIn Top Voice | Top 100 Future of Work Leader | Stanford’s Top 2% of Scientists List | Keynote Speaker | Dad

    18,073 followers

    🎙️ 🏡 One of the country’s top compliance training companies recorded the conversations of its employees by turning their laptops into covert listening devices while they were at home, in a case that tests the boundaries of workers’ privacy.

    Victorian police are investigating claims that Safetrac breached state surveillance laws after chief executive Deborah Coram admitted in legal documents that her company recorded the audio and screens of select members of its staff, who work from home.

    The idea of recording workers’ conversations, let alone their conversations at home, is unusual. Given how readily employees’ home lives leak into their work lives during remote working, the risks are extraordinary.

    Unions are pushing for new laws to guard against unreasonable or excessive monitoring in the workplace, and state Labor governments are considering urgent reforms to update outdated surveillance laws for the WFH era. State work health and safety laws are also starting to recognise surveillance as a potential psychosocial hazard.

    Privacy has long been considered an individual right: you waive the rights to your data, or you consent to workplace monitoring. But cases such as Safetrac show that, much like work health and safety laws, privacy can also be understood as a collective right. Privacy is relational. Surveillance can affect not only you but also those around you, including family members, friends and other third parties.

    ❓ Is it ever okay to record the audio and screens of employees when they are working from home, or in other locations outside the traditional workplace? As always, keen to hear your thoughts, opinions and experiences. 🙏

    Link to the full AFR article available in the comments section below 👇
    WorkFLEX-Australia
    Author: David Marin-Guzman, The Australian Financial Review

    #wfh #employeesurveillance #futureofwork

  • View profile for Steven Claes

    CHRO | Introvert Advocate | Career Growth for Ambitious Introverts | HR Leadership Coach | Writer | Newsletter: The A+ Introvert (60% Open Rate)

    158,022 followers

    The most dangerous lie in business today:
    "We need to monitor our people to ensure productivity."

    A CEO friend shared his "productivity tracking" results with me.
    The data was shocking.

    Their most monitored team?
    → Highest turnover rate.
    → Zero innovation.
    → Lowest output.

    And here's a controversial take 🔥 (which I shared with him):

    Every keystroke you track,
    Every minute you monitor,
    Every bathroom break you log...

    You're not measuring productivity.
    You're documenting distrust.
    (a bit black or white, but still...)

    So, what actually drives performance?

    1/ Crystal Clear Expectations
    → Set measurable outcomes
    → No gray zones on deadlines
    → Define what winning looks like

    2/ Trust by Default
    → Zero surveillance
    → Focus on deliverables
    → Celebrate achievements, not hours

    3/ Adult Conversations
    → Quality check-ins
    → Address issues head-on
    → Solutions over surveillance

    Companies still playing digital babysitter? They're losing the war for talent.
    (And their best people are already interviewing elsewhere.)

    The future belongs to companies that:
    ✓ Trust first
    ✓ Measure impact
    ✓ Enable autonomy

    The harsh reality? Your turnover rate tells the real story.

    P.S. Later, from that same CEO: "Deleted a lot of that monitoring. Our new productivity metric? Trust."

    💭 Are you brave enough to lead with trust?

    —
    👉 Share if you're committed to building better workplaces
    🎯 Follow for more unfiltered leadership insights

  • View profile for Luiza Jarovsky, PhD
    Luiza Jarovsky, PhD is an Influencer

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (91,000+ subscribers), Mother of 3

    126,746 followers

    🚨 [AI POLICY] Big! The U.S. Department of Labor published "AI and Worker Well-being: Principles and Best Practices for Developers and Employers," and it's a MUST-READ for everyone, especially ➡️ employers ⬅️.

    8 key principles:

    1️⃣ Centering Worker Empowerment: "Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace."

    2️⃣ Ethically Developing AI: "AI systems should be designed, developed, and trained in a way that protects workers."

    3️⃣ Establishing AI Governance and Human Oversight: "Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace."

    4️⃣ Ensuring Transparency in AI Use: "Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace."

    5️⃣ Protecting Labor and Employment Rights: "AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and antiretaliation protections."

    6️⃣ Using AI to Enable Workers: "AI systems should assist, complement, and enable workers, and improve job quality."

    7️⃣ Supporting Workers Impacted by AI: "Employers should support or upskill workers during job transitions related to AI."

    8️⃣ Ensuring Responsible Use of Worker Data: "Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly."

    ╰┈➤ This is an essential document, especially while AI development and deployment are accelerating, including in the workplace, and little is said about workers' rights and labor law.

    ╰┈➤ AI developers should keep labor law and workers' rights in mind when building AI systems that will be used in the workplace. Additional guardrails might be required.

    ╰┈➤ Employers should be aware of their ethical and legal duties if they decide to use AI in the workplace. AI-powered systems are not "just another technology"; they present specific risks that should be tackled before deployment, especially in the workplace.

    ➡️ Download the document below.

    🏛️ STAY UP TO DATE. AI governance is moving fast: join 36,900+ people in 150+ countries who subscribe to my newsletter on AI policy, compliance & regulation (link below).

    #AI #AIGovernance #AIRegulation #AIPolicy #WorkersRights #LaborLaw

  • View profile for Asim Amin

    Founder & CEO at Plumm | Speaker | Advisor

    35,182 followers

    Remote work has created a new obsession: productivity tracking software that monitors keystrokes, tracks mouse movements, and measures "active time."

    But most companies are measuring the wrong things.

    Someone just solved their company's biggest client problem in 20 minutes of thinking. Then they went for a walk to clear their head and plan what comes next. The productivity software flagged them as "unproductive."

    Meanwhile, a colleague spent eight hours clicking through spreadsheets, moving their mouse, and looking busy. The software thinks they're amazing.

    Companies are measuring activity, not results. Motion, not progress. Hours logged, not problems solved.

    Productivity isn't about being busy. It's about moving things forward.

    The best remote workers know when to step away from the screen to think clearly. Their best ideas come during walks, conversations, or while doing something completely different. But productivity software sees this as "inactive time."

    If a company needs to track every keystroke to know whether someone's working, it has either hired the wrong people or created the wrong culture.

    Trust and results beat surveillance every time.

    What's your experience with remote work? Do these tracking tools actually help?

  • View profile for Rob Gilder

    Leadership Expert helping individuals and teams perform through Bulletproof Empathy™ – coaching, consulting & speaking for modern leaders

    4,386 followers

    You can’t monitor your way to a high-performance culture.

    If a team only performs when it is being watched, you don't have a culture; you have a surveillance state. And in the modern workplace, surveillance is the fastest way to kill the very innovation you’re trying to measure.

    Real leadership happens in the "shadows": it’s what your team does when the lights are off. It’s the difference between a team that ticks boxes because it has to, and a team that creates value because it wants to.

    The rhetoric of the boardroom often misses this:

    🟢 The Watcher’s Paradox: People don’t give their best when they’re watched; they give their best when they’re trusted.

    🟢 The Safety Multiplier: When people feel safe, they perform better. That isn't a "soft" sentiment; it’s a biological performance requirement.

    🟢 The Invisible Engine: Culture isn't found in your workspace or equipment. It’s found in the "smell" of your office, the rituals, the unprompted collaboration, and the way decisions are made when you aren't there to mediate.

    As leaders, we have to ask ourselves: are our structures and processes designed to catch mistakes, or to foster autonomy and development? If you strip away the office décor and the employee handbook, what remains of your culture? If the answer is "silence," the trust isn't there.

    High performance isn't forced through a lens; it’s unlocked through a sense of belonging and safety.

    Have you noticed a shift in output when you’ve stepped back and leaned into trust rather than tracking?

    Follow Rob Gilder for reflections on leadership, empowerment, and building healthy team cultures.

  • View profile for Richard Coleman MAICD

    Leading change in WHS and Sustainability

    7,416 followers

    So today we have another example of a business leader saying and doing something so unbelievably stupid in relation to WHS that my desk has an indentation where my head has been hitting it since I read the reporting in the AFR.

    A business called Safetrac turned on audio surveillance on the computers of staff who were working from home: without a clear policy, without telling them, and certainly without anything approaching consultation. Apparently Safetrac deployed Teramind to monitor “underperformers,” enabling laptop microphones from mid-April to early June, and only expanded its four-sentence surveillance policy at the end of June. On 12 August, WorkCover agent Allianz accepted a mental-injury claim from a worker who developed anxiety after discovering the audio surveillance. Victoria Police is reportedly investigating.

    This is not a grey area of etiquette. It is a failure of process, consultation and risk management. In Victoria, employers must consult with employees and HSRs when identifying or assessing hazards, when deciding on risk controls, and when monitoring the health of employees and workplace conditions. Rolling out intrusive monitoring, especially audio capture, undoubtedly triggers those duties. Consultation isn’t a courtesy; it is a statutory requirement.

    But wait, it gets worse. Safetrac’s updated policy reportedly asserts that monitoring “in accordance with employment contracts, company policies, and relevant legislation are not considered psychosocial hazards.” I will gladly buy a decent bottle of wine for any of my contacts who can point to the law that allows CEOs to arbitrarily define what is and is not a hazard. Thankfully, we live in a society where you can’t just do things to people and unilaterally decide that what you’re doing is not harmful, that what you’re proposing carries no risks, and that in your enlightened and lofty view people should be happy about your decisions.

    Psychosocial hazards are determined by the nature of work and its impacts, assessed through a risk process with worker consultation, not by policy wording. Attempting to define surveillance out of “hazard” status misses both the law and the science.

    If the AFR reporting is accurate, here’s what good governance should have required before any deployment:
    • A formal psychosocial risk assessment with workers and HSRs, and clear, documented consultation.
    • A proportionate purpose test (what problem are we solving?), and strict minimisation (no audio by default).
    • Transparent, specific notices and informed consent, not a retrofit policy.

    Compliance isn’t about how cleverly you can write a policy after the fact. It’s about whether your decisions respect the law, your people, and the risks you create. On all three counts, this approach fails the test.
