
Quick Thoughts on AI and the 80-20 Rule

One of the first things I learned in AI, many decades ago, was that the 80-20 rule, also known as the Pareto Principle, generally holds. Those early AI systems, called Expert Systems, were based on "rules" and "facts" programmed into a knowledge base. The rules, when triggered by input data (e.g., patient symptoms), could draw inferences (e.g., a medical diagnosis). A few rules cover most cases well, but many additional rules are needed to handle the edge cases. An intriguing question is whether today's data-driven generative AI (gAI) systems follow the same rule with data. Do we get reasonable performance with small datasets, and does training the AI on more and more data have diminishing returns?
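For readers who never met an Expert System, the sketch below is a minimal, hypothetical illustration of rule-based inference. The symptoms, rules, and diagnoses are invented for the example and do not come from any real medical knowledge base.

```python
# Toy forward-chaining expert system: a rule fires when all of its
# conditions are present in working memory, adding its conclusion as a
# new fact. All rules and facts here are illustrative placeholders.

RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"fever", "rash"}, "possible_measles"),
    ({"possible_flu", "shortness_of_breath"}, "refer_to_physician"),
]

def infer(initial_facts):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The two matching rules chain: symptoms -> possible_flu -> refer_to_physician.
print(infer({"fever", "cough", "shortness_of_breath"}))
```

A handful of such rules covers the common cases; the long tail of edge cases is what forces the knowledge base to grow.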
If we follow the AI hype, there is a lot of discussion about big foundation models like ChatGPT and Bard that are trained by large tech companies on massive amounts of public data (and some not-so-public data that comes with a host of legal questions). From the discourse, it seems that data network effects are in play and that more data and bigger models improve performance. However, these companies appear to be running out of data. OpenAI, for instance, reportedly resorted to transcribing audio from YouTube videos to obtain training data for GPT-4. Some predictions indicate that the supply of quality data on the Internet will run out in a year or two. The improved capabilities of these models, like passing medical and bar exams, seem to indicate that more data pays off in spades. But does it? The large models are difficult to handle and very expensive to train (think electricity costs), and there may well be diminishing returns on data.
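As a back-of-the-envelope illustration of what diminishing returns on data would look like, the sketch below assumes the power-law relationship between dataset size and loss that the neural scaling-law literature often reports. The constants are made-up placeholders, not measured values.

```python
# Toy illustration of diminishing returns under an assumed power law:
# loss(D) = c * D**(-alpha), where D is dataset size in tokens.
# c and alpha are arbitrary placeholders chosen for illustration.

c, alpha = 10.0, 0.1

def loss(tokens: float) -> float:
    return c * tokens ** (-alpha)

for tokens in [1e9, 1e10, 1e11, 1e12]:
    print(f"{tokens:.0e} tokens -> loss {loss(tokens):.3f}")

# Each 10x increase in data multiplies the loss by 10**(-0.1) ~= 0.79,
# so the absolute improvement bought by each additional token keeps shrinking.
```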
So, what is the alternative? One option is to synthetically generate data. But if an AI is trained on data it generated itself, regardless of the "smartness" of the AI, a reinforcing loop will exacerbate whatever problems exist in the data. Or we could have one AI check another AI's data, perhaps mitigating some of the reinforcement problem. Further, buttressing sparse data with human expertise or the laws of physics (e.g., to better understand physical environments) can help.
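To make the reinforcing loop concrete, here is a toy simulation under an invented setup: each "generation" of a model is fit only to samples drawn from the previous generation. The numbers are illustrative and do not come from any real training run.

```python
import random
import statistics

# Toy self-training loop: each generation fits a normal distribution to
# samples produced by the previous generation, then generates the next
# generation's "training data". Sampling error is baked into every step,
# so the estimated spread wanders away from the true value of 1.

random.seed(0)
N = 20
data = [random.gauss(0, 1) for _ in range(N)]   # generation 0: "real" data

for gen in range(1, 6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(N)]  # train only on AI output
    print(f"generation {gen}: estimated std = {sigma:.3f}")
```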
Or we could modify the Pareto Principle to include tradeoffs between size and quality: high data volume can be traded for high data quality. It is true that large models hallucinate, particularly in areas where data is sparse or noisy. Not every gAI can be an expert in everything. So, smaller models trained on dense data bounded to a specialized area can perform very well. Moreover, they are cheaper and easier to manage, and they may be easier to trace and understand, making the AI more explainable.
So where does that leave us? We have large foundation models, with their own products, pushed by Big Tech. These can be accessed via APIs and fine-tuned to individual needs, but the access and fine-tuning are largely controlled by their developers. We have small models trained on local, proprietary data. These models are often open source and affordable for smaller and mid-sized companies, but here too, the more specialized the domain, the tougher it is to get adequate training data. We could also have models trained on synthetic data, which will be particularly useful in fields where observational data is limited (e.g., astrophysics) or experimental data is expensive (e.g., materials science). More recently, we have seen the emergence of decentralized AI, boosted by blockchain technology, where data is secured across a network of nodes. This opens the door to a more accountable, scalable, and cooperative approach to AI applications.
In conclusion, the Pareto Principle is somewhat fuzzy in the AI context. It is not only about the size of datasets; the key question is which characteristics of the 20% of the data will provide 80% of the impact on AI outputs.

Varun Grover

George and Boyce Billingsley Endowed Chair and Distinguished Professor, Walton College of Business, University of Arkansas
