The first intellectual shock of my life came while I was anxiously sitting in my Philosophy 101 class. I wasn’t anxious about the class itself — philosophy seemed like an abstract, ancient but still “passable” subject for a young college student with no prior exposure to metaphysics; I was anxious because it was my first time taking a class taught entirely in English, and I worried that I wouldn’t be able to keep up. Our professor had an interesting background — he got his PhD in computer science at Stanford, then decided he wasn’t about to do that all his life and went on to earn a second PhD in Philosophy. In that class, he introduced the first philosopher he thought all of us confused kids should know: René Descartes. At the time, for me to comprehend English, everything had to pass through some kind of central translation mechanism, both ways. The whole time I felt like I was living in my own delayed DVR, recording what was happening now only to understand it ten seconds later. That ebb-and-flow delay likely amplified the impact, because by the end of the class I found myself staring blankly at the handout, still processing, shocked at what I had just gone through. To this day, I vividly remember the quick conversation I had with a classmate after class.
“Wow, that was…” I exclaimed, unable to find a word in my small vocabulary to properly describe my feelings.
“Yeah, so genius right?”
If genius means sweeping your mind over to a completely new dimension and changing how you look at things ever since, I guess that’d be the word.
Reading Meditations I to IV (yes, at that time only up to IV), shoddily printed on the handouts prepared by the thoughtful professor, was a mind-blowing dopamine rush I never knew I could have.
I would only experience that again almost two decades later, when I read Leibniz after dealing with bouts of depression and personal struggles that could all have been avoided had I not managed to make every wrong life decision I possibly could.
I am not bringing up Descartes at random just because I’ve had a tendency to name-drop great thinkers in every essay I’ve written recently. As a matter of fact, he wasn’t even on the “moodboard” when I started coding GEN ai (to be discussed soon) but, as most stories go — one night, about two weeks ago, his smiling face popped up as I was dealing with the fundamental problem of GEN:
What’s being it?
As the man who forever changed our understanding of substance, Descartes would uncannily inspire and influence, almost 500 years later, a scrawny little absurd program that hopes to recode substance properties into meaningful representations.
That program is GEN ai and I am going to introduce some of its key concepts here today. We’ll also revisit Descartes a bit later when we talk about dualism.
If you do not intend to read any further: GEN ai is designed to be the “cognitive-like,” or quasi-cognitive, engine of any program it is built into. Calling it cognitive-like is a bit unsettling, as it’s indeed a stretch (I would call myself a fraud if I ever went out trying to sell that to hungry and foolish investors), but it would otherwise be too abstruse to explain. In the last post, I mentioned that I was going to write about the Mill (the ethics machine) next, but then I realized that the Mill would not exist without the GEN ai engine running at its best pace. GEN is also key to the Trinity Problem, introduced in the last post as well, and to Dawn, as he gradually learns how to read and understand what he reads (a quasi-cognitive capacity).
GEN ai nomenclature
I guess most people would assume that GEN ai stands for General A.I. or something along that line. In fact, it could not be further from that. GEN ai, 兼爱, is an ancient Chinese philosophy developed by Mozi around 400 BC (the pronunciation is roughly /gen eye/, hence the spelling). While most Westerners are understandably more familiar with Confucianism, adopted throughout history mainly as a political instrument for emperors to rule and oppress their subjects, I find the other schools born in that same period much more interesting, among them Mohism. To use a modern term, Mozi was one of the pioneers of social equality, and ideas of his such as the concept of Love would find a much wider and more enthusiastic audience today than some 2,400 years ago. Many have translated Mohist Love as universal love, as he advocated loving and caring for all people equally, contrary to Confucianism. However, I prefer to interpret and understand it as generic or general love. Not that different from the official translation, I suppose, but it provides a subtle first-person perspective on who is to Love and who is to be Loved. To me, that matters.
Understanding GEN ai
Why would I name the program GEN ai, then? Of course, it’s more than pure celebration of an overlooked school of thought that has remained unknown to most people. Back when I was discussing Dawn, I touched very briefly on the ultimate motivation behind his seemingly romantic origin: I wouldn’t be happy with a world where all the “connections” were based on money, power and class. Keep in mind that these connections are defined broadly as the formation and maintenance of relationships between any two objects, abstract or concrete, that can interact with each other, from meeting new people to using new technologies. For example, how do we get to know about a new product these days? Marketing, in its various forms, is usually the answer. In an oversaturated market, how does marketing work? The definitive answer is money. Any effort in marketing is denominated in dollars. What happens if someone who does not have money wants to sell his legitimate products? He turns to people with money, or he perishes. So, in the end, is this person selling his products or selling the power of his backers’ existing money? To phrase it better: are we buying products, or are we simply succumbing to power? The even more dangerous part is that such a disposition is self-fortifying. After churning through a thousand “tokens” like him and his products, the market is essentially dictated by a select group of powerful people whose only intent is to pick the right products to maximize their collective profits. Imagine a capital market heavily invested in fossil fuels (not just the fuel itself but anything derived from it, including plastics, rubber, etc.): how much sense does it make for that market to genuinely support the development of alternative energy, which will inevitably hurt its interests no matter how much it might benefit from the new industry?
I always say that the current state of our society is radicalized capitalism, a direct result of tens of millions of self-fortifying cycles like that. It’s a paradox as well as a deadlock, which can only be broken in three ways:
- The capitalists have an epiphany and invest against their own interests.
- New capitalists somehow emerge, and in order to compete with the older generations they have to invest in different things.
- The market itself becomes independent and somewhat “conscious,” making it possible to generate and maintain a “free market,” and reinstall the invisible hand.
Being quite pessimistic about human nature and manmade systems (I just don’t see how human beings can overcome egoism), I have serious doubts about the feasibility of the first two options. That leaves the third and last potential solution, the ideology Dawn is built upon — a medium, in our case a de facto resource management and distribution mechanism that functions like the market in its general sense, can be reliably just, free and robust if and only if that mechanism is cognitive and thereby potentially “conscious.” It isn’t hard to understand if we imagine the market, essentially the supplies and demands of lots of people, as a giant person or, more conservatively, as the will of a massive group of people. If we want to claim that such a being exists substantially and freely, we have to prove that it has the ability to think (for itself).
Sound familiar? Sure, as Descartes famously said:
Cogito, ergo sum. (I think, therefore I am)
A cognition-capable mechanism is thus not only a water-pipe system that carries water to its destinations but, at the same time, the mind of its architect. It understands that it is transporting a substance called water to end parties that need this substance called water (for irrigation, consumption and so on), from which the value of this substance called water is calculated. To use a more accessible example, a market with consciousness is capable of knowing what it can access and who might be interested in what it can access, without relying on external metrics that are simply paid for, such as SEO rankings. If Vincent van Gogh paints a painting, such a system will know its value and deliver it to people who care about art, without validating its “valuableness” through a backlinked ad on artnet.com. Before anyone jumps up and says, well, that’s van Gogh for God’s sake, just remember that he was totally unknown while he was alive and died in obscurity. My personal inspiration, though, is Nick Drake, after whom Dawn is rightfully named. When he was 20, he penned a song titled Fruit Tree that goes:
Life is but a memory
Happened long ago
Theatre full of sadness
For a long-forgotten show
Seems so easy
Just to let it go on by
’Til you stop and wonder
Why you never wondered why
Safe in the womb
Of an everlasting night
You’ll find the darkness can
Give the brightest light
Safe in your place deep in the earth
That’s when they’ll know what you were really worth
Right, I decided that I had to create a medium infrastructure that would one day be capable of independent “thinking” (and thus “evaluation”), because I wouldn’t want another aloof, non-conforming talent to end up a fruit tree. If there’s anything our world is desperately lacking today, it must be real talent (and ethics). Of course, it was also built to be applied to things beyond the scope of art. From a well-designed product that can help save the planet to a timid sage whose ideas could herald the next Enlightenment, anything that is valuable because it is intrinsically valuable (“valuable-in-itself”) can take advantage of such a “to-be-is-to-be-understood” system. As for the more commercial aspects, such as marketing and advertising, I’ll discuss their impact at length another time.
GEN is an attempt, as well as an ambition, to wire this cogito mechanism into the two-sided infrastructure, namely Dawn, the resource side, and LOVN (edited due to a systemwide renaming, originally Lova, 11/20/2023), the consuming end, so that the “valuable-in-itself” test can be done subjectively, in a human-centric way. The name GEN ai carries my hope that the system could grow into a force that protects and enhances equality in the near future, when polarization reaches its breaking point and a new order is to be installed. I referenced Nick (for anyone not familiar with him, he passed away in 1974) and van Gogh, but, in fact, the idea is a lot more meaningful in this age of information. Is the Internet really neutral (loving everyone equally), or is it merely perceived to be neutral (making everyone feel that they are being loved equally)? Is the Internet helping us become more informed, or are we rather overloaded with information that’s engineered a certain way? If the search engine ultimately ranks sites and pages by company scale and marketing budget, then what are we really searching for or, even worse, what do we actually find at the end of the day? On a different note, I put quasi in front of the term because being truly cognitive, even if we do not get into the discussion of the Turing test, is basically a dream at this stage. While GEN contributes significantly to my Trinity Problem, it cannot evolve further until I solve the unique representation problem (not to be dramatic, but I hope to finish that by the time I die). For the same reason, I refuse to use the term Artificial Intelligence anywhere. It could undoubtedly change the perception of a product (it’s indeed a magic word these days), but what can perception do for me and the actual problems that I do care about?
GEN ai breakdown
In the remainder of this article, I am going to quickly go through how GEN is generally designed, its current status, and what I plan to do next. Since GEN is designed as part of the representation/encryption algorithm lying at the heart of the Trinity Problem, the discussion of how it is constructed will stay at a conceptual level.
The core structure of GEN is somewhat analogous to that of the human genome (much simpler and more diluted, of course). Functionality-wise, it can be seen as a system-wide translation layer that serves to decompose “substance” into digestible bits. Largely inspired by Descartes and also Kant, this pair-based coding system is designed to analyze and represent things in a deep mapping environment. Descartes believed that there were two types of substance: matter and mind. Matter was anything that could be spatially extended, while mind was all the thinking substance. And then, of course, there’s God, the ultimate being. The GEN ai model inherits this spirit by viewing things from three different dimensions:
- What is it? A round table with three distinctive dents.
- What is it like? Quite old-school and shabby.
- What does it make others like? Nostalgic since it reminds me of the dinner table in my late grandma’s house. (Note this dimension needs interaction with LOVN’s GEN)
The first dimension is the industry-standard labeling method. When we post an article on Medium, our content is labeled this way. It’s quick, heuristic and perfect for search engines. When you search for something in our terminal, this dimension is always visited and fulfilled first. After all, if you want to find a dress, it’s horrendous to be handed vacuum cleaners, no matter how mind-blowing or life-saving they are. The next two dimensions of GEN are the real challenges, elevating this otherwise categorical labeling system into something more continuous, cognitive-ish and natural.
Admittedly, when I was working on this structure, I could see the convincing shadow of a Freudian model (id, ego and superego). Coming from a cognitive science background, though a lifetime ago, I also used, directly and indirectly, several cognitive psychology models for inspiration (e.g., schema, Geon, prototype matching, etc.).
As I explained, I can’t publicize the exact pairings (96 pairs vertically at the moment), but I’d say many of them are unconventional metrics, covering aspects from aesthetics (“dark” — “vibrant”) to ideologies (“anarchism” — “communism”).
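To make the three dimensions and the pair-based coding a bit more concrete, here is a minimal sketch of how such a representation might be laid out. Everything here is hypothetical: the two pair axes are the examples from the text, but the class, field names and scores are my own illustration, not GEN’s actual internals.

```python
from dataclasses import dataclass, field

# Hypothetical pair axes; the real 96 pairs are not public.
# Each axis runs from the left pole (-1.0) to the right pole (+1.0).
PAIRS = [("dark", "vibrant"), ("anarchism", "communism")]

@dataclass
class GenCode:
    # Dimension 1: "What is it?" — categorical labels
    labels: list[str]
    # Dimension 2: "What is it like?" — position on each pair axis
    qualities: dict[tuple[str, str], float] = field(default_factory=dict)
    # Dimension 3: "What does it make others like?" — only filled in
    # after interaction with a LOVN-side profile
    impressions: dict[str, float] = field(default_factory=dict)

# The round table from the example above
table = GenCode(labels=["table", "round", "wooden", "three dents"])
table.qualities[("dark", "vibrant")] = -0.6  # quite old-school and shabby
```

The point of the sketch is only to show why the first dimension is categorical while the other two are continuous, and why the third stays empty until another party’s GEN interacts with it.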
Data for GEN
Naturally, the next question is what type of data I feed into GEN for processing and how the coding process actually operates. At the moment, most of the data comes from first-degree web extraction, and the NLP level is embarrassingly basic (words, terms, phrases, similarities, sentiment, etc.). It is so limited because I have yet to develop a more aggressive algorithm to dynamically construct a responsive knowledge base. For example, if a brand is Leaping Bunny certified, Dawn will increase its number for animal welfare, one of the built-in biased ranking factors, because the Leaping Bunny certification sufficiently entails that. However, not everything is Leaping Bunny, and, in most cases, such a direct mapping is unavailable. An advanced algorithm is critically needed to efficiently reorganize existing external resources, such as Wikipedia and Britannica, into such a lean, comprehensive, query-ready structure, essentially adding yet another intermediary layer. On a related front, the limitation also stems from the embedded NLP program itself. At the moment, it cannot grasp meanings beyond textbook definitions. It couldn’t tell a Karen from a Becky. In a word, the NLP program needs to be context-aware as well, taking into account current world events and, more importantly, cultures. To be honest, I already include unique cultural profiles in the program, such as DaBaby and Snoop Dogg, to facilitate profiling for that purpose.
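The Leaping Bunny example amounts to a hand-written entailment rule: certification X sufficiently entails a bump to factor Y. A toy version of that direct mapping might look like the sketch below; the Leaping Bunny → animal welfare link follows the text, but the weight and function names are made up for illustration.

```python
# Hand-maintained entailment rules: certification -> entailed factor bumps.
# The weight (0.3) is an arbitrary placeholder, not a real GEN value.
ENTAILMENTS = {
    "leaping_bunny": {"animal_welfare": 0.3},
}

def apply_certifications(scores: dict, certifications: list) -> dict:
    """Return a copy of `scores` with every entailed factor bumped."""
    updated = dict(scores)
    for cert in certifications:
        for factor, delta in ENTAILMENTS.get(cert, {}).items():
            updated[factor] = updated.get(factor, 0.0) + delta
    return updated

brand_scores = apply_certifications({"animal_welfare": 0.5}, ["leaping_bunny"])
```

The hard part described in the paragraph above is precisely replacing this hand-maintained table with an algorithm that derives such rules automatically from resources like Wikipedia.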
Coding and Working
If we can have any data we can think of, how do we write a program to predict what Peter Griffin would do at 11:05 am this coming Monday?
I think just about anyone, data scientist or not, would say we can gather all the data on what Peter has been doing every Monday, or every April 25th, at 11:05 am and deduce from there. I don’t think there are too many sane people out there who believe that gathering data on every middle-aged white man in America and averaging out a probability would give a better prediction.
However, the latter approach is the common method in machine learning programs. Human beings are reduced to data points, which are then fed into various formulas and fitted onto various curves.
That hit me when I was painfully encoding GEN the other day (I’d finished three movies and two TV shows by the end of it; coincidentally, all Korean). I had always been vaguely aware of this reality but assumed it was because, well, data collection would be impossible the other way around. I had even wondered what Leibniz, arguably a father of computer science, would do to code his monads into his numerical system, for these monads, his understanding of substances, are so absolutely singular. Coding GEN, though, gave me a new perspective and an egoistic alternative theory — it’s not just about the data but also about the people who work on the data and devise the sets of algorithms that process it. What does that mean? In short, I have doubts about the predominant teamwork method in ML development. It reminds me of Russell’s Paradox, something I somehow keep running into these days (I’ll share another instance at the end). While data plus data plus data of data is still data, can we say the same about people or, to be precise, minds? No matter how effective team management gets, can we really confidently say a group of brains equates to a brain of a group (see “China Brain” if you are interested; not directly related to the discussion here, but an interesting thought experiment I’d like to bring up)? If the initiator is not categorically human, then how can we expect such an entity to train another machine to have human-like intelligence (“Artificial Intelligence”)? Even I, who obviously cannot afford a team (or a proper dinner in New York these days), had to fight off three other voices in my head when teaching GEN to understand why DaBaby was cancelled. To go slightly further: in a world that is deeply and systemically module-oriented (which, again, boosts efficiency at the cost of individuality and humanity), what are we really training and learning for?
[Due to the word limit, I’ll continue to discuss the computation part of GEN in the next post when I talk about the Mill, which counts on GEN to quantify general ethics.]
As promised, here’s that self-referential paradox I mentioned, one that could drive me into madness just by thinking about it:
If I am developing Dawn into a system that can host and discover intrinsically valuable entities, then can Dawn, basically as a set of all sets, discover himself? But, I guess, a more meaningful question in this very moment will be — how can Dawn be discovered in the first place?