Data governance is one of the most misunderstood topics in enterprise. Because most people explain it from the inside out: policies, councils, standards, stewardship.

But the business does not buy any of that. The business buys outcomes:
→ trustworthy KPIs
→ vendor and partner data you can actually use
→ faster financial close
→ fewer reporting escalations
→ smoother M&A integration
→ AI you can deploy without creating risk debt

Most AI programs fail for boring reasons: nobody owns the data, quality is unknown, access is messy, accountability is missing.

So let's simplify it. Data governance is four things:
→ ownership
→ quality
→ access
→ accountability

And it becomes very practical when you think in 4 layers:

1. Data Products (what the business consumes)
→ a named dataset with an owner and SLA
→ clear definitions + metric logic
→ documented inputs/outputs and intended use
→ discoverable in a catalog
→ versioned so changes don't break reporting

2. Data Management (how products stay reliable)
→ quality rules + monitoring (freshness, completeness, accuracy)
→ lineage (where it came from, where it's used)
→ master/reference data alignment
→ metadata management (business + technical)
→ access controls and retention rules

3. Data Governance (who decides, who is accountable)
→ data ownership model (domain owners, stewards)
→ decision rights: who can change KPI definitions, thresholds, and sources
→ issue management: triage, escalation paths, resolution SLAs
→ policy enforcement: what's mandatory vs optional
→ risk and compliance alignment (auditability, approvals)

4. Data Operating Model (how you scale across the enterprise)
→ domain-based setup (data mesh or not, but clear domains)
→ operating cadence: weekly issue review, monthly KPI governance, quarterly standards
→ stewardship at scale (roles, capacity, incentives)
→ cross-domain decision-making for shared metrics
→ enablement: templates, playbooks, tooling support

If you want to start fast: Pick the 10 metrics that run the business. Assign an owner. Define decision rights + escalation. Then build the data products around them.

→ If you want to stay ahead as AI reshapes work and business, you will get a lot of value from my free newsletter: https://lnkd.in/dbf74Y9E
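The "data product" layer above can be made concrete with a minimal sketch. This is only an illustration using the Python standard library; the field names (owner, sla_hours, version, metric_definitions) are hypothetical, not taken from any specific catalog tool:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a data-product catalog entry; field names are
# illustrative, not from any particular data catalog product.
@dataclass
class DataProduct:
    name: str                      # named, discoverable dataset
    owner: str                     # accountable domain owner
    sla_hours: int                 # freshness SLA the owner commits to
    version: str                   # versioned so changes don't break reporting
    metric_definitions: dict = field(default_factory=dict)  # clear metric logic

# Example: one of the "10 metrics that run the business".
revenue = DataProduct(
    name="monthly_recurring_revenue",
    owner="finance-domain",
    sla_hours=24,
    version="1.2.0",
    metric_definitions={"mrr": "sum of active subscription fees per month"},
)
```

The point of the sketch: every entry forces the four things (ownership, quality via SLA, access via the catalog, accountability via the named owner) to be explicit rather than tribal knowledge.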
-
The most dangerous time of the day is the afternoon, and science proves it. Your afternoon slump isn't just about feeling tired. It's way worse than that. Research shows that standardized test scores drop in the afternoon. Anesthesia errors are three times more likely at 3 PM than at 9 AM. Doctors find fewer polyps in colonoscopies performed later in the day. Car accidents spike between 2 PM and 4 PM. Here's the thing: your brain just doesn't perform at its best in the afternoon. It's the trough of your day, a biological dip in energy and focus about seven hours after you wake up. So how do you beat it? Here are three simple fixes: Number one, schedule your most important work in the morning. Number two, take a strategic break. Research shows even 10 minutes helps. Number three, avoid making big decisions between 2 PM and 4 PM. Afternoons are risky, but now you know how to outsmart them.
-
STUDY FINDS COST PER WEAR INFORMATION SHIFTS SHOPPERS TO QUALITY: A new study published in Psychology & Marketing offers a fascinating look at what drives fashion purchasing decisions. Researchers from the University of Bath and Cambridge University found that simply showing consumers the cost per wear (CPW) of garments (price divided by the number of times an item can be worn) can shift preferences away from cheap, low-quality clothing toward higher-priced, longer-lasting options. The findings draw on behavioural psychology to reveal that people respond more to perceived 'economic value' than to abstract sustainability messages. When shoppers could compare CPW between garments, and especially when figures were backed by trusted certification, they were far more likely to choose quality over quantity. The authors suggest CPW could be a powerful tool for brands and policymakers seeking to reframe sustainability as smart spending. Full story in comments.
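The CPW arithmetic is simple enough to sketch. The prices and wear counts below are made-up illustrations, not figures from the paper:

```python
def cost_per_wear(price: float, expected_wears: int) -> float:
    """Cost per wear: purchase price divided by how often the item is worn."""
    return price / expected_wears

# Hypothetical comparison: a cheap jacket worn 10 times
# vs. a pricier, durable one worn 200 times.
fast_fashion_cpw = cost_per_wear(30.0, 10)   # 3.00 per wear
durable_cpw = cost_per_wear(120.0, 200)      # 0.60 per wear
```

On this framing the more expensive garment is the cheaper one per wear, which is exactly the reframing of sustainability as smart spending that the authors describe.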
-
Law alone is no longer enough. Clients today don't just want a memo on risk. They want to know how that risk impacts their product launch, their valuation, and their compliance in a world driven by AI and global regulation. This is why multidisciplinary legal teams are emerging as winners. Lawyers who collaborate with economists, engineers, coders, and policy experts aren't sidelined. They lead. They shape strategy, deliver clarity, and redefine value for clients. I've explored this shift in my latest column: Multidisciplinary Legal Teams Are Winning: Here's Why (see below). Would love to hear your take. Are we ready to break away from traditional models and embrace hybrid teams? #Law #LegalInnovation #FutureOfWork #Leadership #LegalProfession #Strategy
-
Everyone wants AI. But what are they actually funding? According to Deloitte's latest survey of 600 manufacturing executives, the answer is clear: They're funding data foundations. They're funding connectivity. They're funding automation infrastructure. They're not buying the hype - they're building the backbone.

• ðð% of manufacturers are spending more than 20% of their improvement budgets on smart manufacturing.
• ðð% say data analytics is a top investment priority.
• ðð% are putting cloud and AI next.
• ðð% are focused on active sensors - the eyes of their factories.

Why? Because without clean, connected, contextualized data, none of the shiny stuff works. This isn't a pilot phase. This is the build phase - and it's quietly transforming how factories think, sense, and act. Despite all the tech, the lowest maturity score? Human capital. Manufacturers know the systems are coming online. Now they're scrambling to bring the people along. So if you're a manufacturer still working off spreadsheets and tribal knowledge - know this: Your competitors aren't just automating. They're upgrading their operational IQ. And if you're not investing in your digital foundation today… you're budgeting for irrelevance tomorrow.

Read the full report: https://lnkd.in/e6_QsJcw
*******************************************
• Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
• Ring the 🔔 for notifications!
-
In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well. Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work, and commercial people with AI had more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today's AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences. This was a massive team effort with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani along with Hila Lifshitz, Raffaella Sadun, Lilach M., me and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair and Stewart Taub. Substack about the work here: https://lnkd.in/ehJr8CxM Paper: https://lnkd.in/e-ZGZmW9
-
Apache Spark has levels to it:

- Level 0
You can run spark-shell or pyspark; it means you can start.

- Level 1
You understand the Spark execution model:
• RDDs vs DataFrames vs Datasets
• Transformations (map, filter, groupBy, join) vs Actions (collect, count, show)
• Lazy execution & DAG (Directed Acyclic Graph)
Master these concepts, and you'll have a solid foundation.

- Level 2
Optimizing Spark queries:
• Understand the Catalyst Optimizer and how it rewrites queries for efficiency.
• Master columnar storage and Parquet vs JSON vs CSV.
• Use broadcast joins to avoid shuffle nightmares.
• Shuffle operations are expensive. Reduce them with partitioning and good data modeling.
• Coalesce vs repartition: know when to use them.
• Avoid UDFs unless absolutely necessary (they bypass Catalyst optimization).

- Level 3
Tuning for performance at scale:
• Master spark.sql.autoBroadcastJoinThreshold.
• Understand how task parallelism works and set spark.sql.shuffle.partitions properly.
• Skewed data? Use adaptive query execution.
• Use EXPLAIN and queryExecution.debug to analyze execution plans.

- Level 4
Deep dive into cluster resource management:
• Spark on YARN vs Kubernetes vs Standalone: know the tradeoffs.
• Understand executor vs driver memory: tune spark.executor.memory and spark.driver.memory.
• Dynamic allocation (spark.dynamicAllocation.enabled=true) can save costs.
• When to use RDDs over DataFrames (spoiler: almost never).

What else did I miss for mastering Spark and distributed compute?
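The Level 1 distinction between transformations and actions can be felt without a cluster. A toy pure-Python analogy (generator expressions, not actual Spark): transformations only describe work, and nothing runs until an "action" consumes the pipeline:

```python
# Pure-Python analogy for Spark's lazy execution; no Spark required.
data = range(1, 6)

# "Transformations": nothing executes yet, we are only chaining a lazy
# pipeline, much like Spark building up a DAG.
doubled = (x * 2 for x in data)
big_only = (x for x in doubled if x > 4)

# "Action": list() forces the whole pipeline to run, like collect().
result = list(big_only)
print(result)  # [6, 8, 10]
```

In real Spark the same principle means you can chain many transformations cheaply, and costs (including shuffles) are only paid when an action like collect() or count() triggers the plan.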
-
Looking inside an actual AMD chip! Here's a bit of a Ryzen processor made on TSMC's 7-nanometer node. You can see the web of interconnects, the metal wires that connect the transistors (that bottom layer) on a chip to harness their computing power. The image was taken with a new ptychographic X-ray laminography (PyXL) technique out of the PSI Paul Scherrer Institut, University of Southern California and ETH Zürich. The technique currently has 4-nanometer resolution, and the scientists have a path to get to 1 nm resolution. The cool thing about this technology is its non-destructive imaging power to help find defects in chips. Today's chips are so complicated that electrical tests alone can no longer pinpoint where a defect is: chipmakers use a mix of optical imaging and other methods to zero in on potential problem areas. They then image such areas with a slow but very high-resolution scanning electron microscope. Finally they might take a slice of a chip for further imaging with a transmission electron microscope (TEM). When they find the flaw, they can then go back and correct their design. But with PyXL, they have another tool to pinpoint defects without destroying the chip. ✨
-
Accessibility For Designers Checklist (PDF: https://lnkd.in/e9Z2G2kF), a practical set of cards on WCAG accessibility guidelines - from accessible color, typography, animations, media, layout and development - to kick off accessibility conversations early on. Kindly put together by Geri Reid.

WCAG for Designers Checklist, by Geri Reid
Article: https://lnkd.in/ef8-Yy9E
PDF: https://lnkd.in/e9Z2G2kF
WCAG 2.2 Guidelines: https://lnkd.in/eYmzrNh7

Accessibility isn't about compliance. It's not about ticking off checkboxes. And it's not about plugging in accessibility overlays or AI engines either. It's about *designing* with a wide range of people in mind - from the very start, independent of their skills and preferences. In my experience, the most impactful way to embed accessibility in your work is to bring a handful of people with different needs early into the design process and usability testing. It's making these test sessions accessible to the entire team, and showing the real impact of design and code on real people using a real product. Teams usually don't get time to work on features which don't have a clear business case. But no manager really wants to be seen publicly ignoring their prospective customers. Visualize accessibility to everyone on the team and try to make an argument about potential reach and potential income. Don't ask for big commitments: embed accessibility in your work by default. Account for accessibility needs in your estimates. Create accessibility tickets and flag accessibility issues. Don't mistake smiling and nodding for support - establish timelines, roles, specifics, objectives. And most importantly: measure the impact of your work by repeatedly conducting accessibility testing with real people. Build a strong before/after case to show the change that the team has enabled and contributed to, and celebrate small and big accessibility wins. It might not sound like much, but it can start changing the culture faster than you think.

Useful resources:
Giving A Damn About Accessibility, by Sheri Byrne-Haber (disabled): https://lnkd.in/eCeFutuJ
Accessibility For Designers: Where Do I Start?, by Stéphanie Walter: https://lnkd.in/ecG5qASY
Web Accessibility In Plain Language (Free Book), by Charlie Triplett: https://lnkd.in/e2AMAwyt
Building Accessibility Research Practices, by Maya Alvarado: https://lnkd.in/eq_3zSPJ
How To Build A Strong Case For Accessibility:
↳ https://lnkd.in/ehGivAdY, by 🐦 Todd Libby
↳ https://lnkd.in/eC4jehMX, by Yichan Wang
#ux #accessibility
-
Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection. Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows: "Here's code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it." Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.
And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement. Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses. Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about reflection, I recommend:
- Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
- Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)
[Original text: https://lnkd.in/g4bTuWtU ]
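The generate → critique → rewrite loop, plus the unit-test idea above, can be sketched in a few lines. This is a toy, not anyone's production agent: call_llm is a stub standing in for a real model API, and the "bug" and "fix" it returns are hard-coded so the control flow is visible offline:

```python
# Minimal Reflection-loop sketch with a tool check (a unit test).
# call_llm is a stub; a real implementation would call an LLM API here.

def call_llm(prompt: str) -> str:
    # Stubbed behavior: the first draft has a bug; once the prompt
    # contains critique about a failed test, return a corrected version.
    if "failed test" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # buggy first draft

def passes_unit_test(code: str) -> bool:
    # Tool use: actually execute the generated code against a test case.
    namespace = {}
    exec(code, namespace)
    return namespace["add"](2, 3) == 5

def reflect_and_rewrite(task: str, max_rounds: int = 3) -> str:
    code = call_llm(f"Write Python code for: {task}")
    for _ in range(max_rounds):
        if passes_unit_test(code):
            break
        # Feed the failure back as critique and ask for a rewrite.
        code = call_llm(
            f"This code failed test add(2, 3) == 5:\n{code}\n"
            "Criticize it and rewrite a corrected version."
        )
    return code

final = reflect_and_rewrite("add two numbers")
```

Swapping the stub for a real LLM client, and the single hard-coded test for a small test suite, gives the criticism/rewrite loop described above; the two-agent variant splits the generate and critique prompts across separate agents.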