Slouching Towards Renaissance

for life is not a paragraph
32 min read · Nov 20, 2023


Years ago, I ventured into the art industry. The glamorous New York art scene was perniciously charming, and the illusion of being part of it seemed like the easiest escape I could find from a terrifyingly honest and humbling reality. There’s frankly not much to say about that chapter of my life other than that I had many necessary and unnecessary regrets. I did, though, discover a profound interest in the Renaissance period while researching and dealing with the art from those days. Despite the faint, inexplicable connection I had always felt with that era, I would never have imagined that years later it would inspire and guide me to create a suite of computer programs. It seems only fitting, then, to name this set of programs, and any future one that follows the same philosophy (and spirit), Renaissance.

In short, Renaissance is a digital identity scheme I am developing to address some of the imminent challenges that arise in a more decentralized Internet. As hinted above, it was named for its embodiment of the essence of the splendid Renaissance period: humanism. It also goes beyond that. The ideological transition from the Middle Ages to the Renaissance uncannily mirrors the technological revolution, and likely the resulting social change, we are heading into. I mentioned this paradigm shift a few times in my previous notes, but now, with Renaissance taking shape, we can finally take the conversation to the next stage. While I introduced some key concepts in the last note back in May, in this write-up I’ll elaborate further on the underlying motivation, structural design and practical usage of Renaissance, respectively. As the discussion will mainly focus on the conceptual aspects, please visit and make use of the Renaissance showcase to see how Renaissance actually works at the application level.

The center cannot hold

The ultimate purpose of MOFFAS, short for Mutuality-Oriented Free-Form Allocation Scheme, is to install an alternate market system that supports as well as thrives on decentralizing both the supply side (represented by “Dawn”) and the demand side (facilitated by the “Lovn” framework). In this system, mutually empowered resources and consuming parties have the authentic freedom to form relations, based solely on “intrinsic values,” without the involvement and supervision of powerful intermediaries. It would then become possible for dynamic and spontaneous transactions to take place between the two autonomous parties, instead of routing through an instance created and managed by a phantom yet controlling third-party authority. Obviously, the fundamental barrier to this idealized (utopian) model is how to realize “decentralization” without intermediaries when the Internet is a de facto intermediary of intermediaries. It wouldn’t have been easy, a decade ago, to even introduce the idea, but thanks to the frenzy of cryptocurrencies, decentralization, as a concept, has steadily and favorably established itself in public discourse. However, while the blockchain movement is progressive, decentralization of this sort is largely executed at the higher layers (what things do), such as services and applications, as opposed to the lower ones (what things are). Of course, one could always argue that cryptocurrencies are also “what things are” in themselves. We will circle back to this comparison a little later.

For a MOFFAS-type market to function, decentralization has to run at the atomic level. More specifically, it is the resources themselves that have to be decentralized, rather than the events (services) attached to or associated with them. To better explain the difference, we have to revisit/resume the painful conversation on the metaphysics of Dawn. I mentioned in the last note that a Dawn Resource Object (DOBJ) is not an actual data file stored on a hard disk but a print of its Ling instead. Ling, a term borrowed from Chinese, originally means the spirit, energy and soul of something; here, with DOBJs, Ling is better understood as the “being” of something, if we all agree that something is there to be and to be perceived in the first place. Note that Dawn is built on the axiom that everything has a unique Ling, which the system learns to acknowledge, recognize and work with. A short clip of my cat scratching my face has its own immutable Ling, which persists through sessions regardless of technicalities or my will. These sessions, named DOBJ Instances, are the deliveries of DOBJs to recipients in real life, whether as TikTok videos or streaming bits. If we view a resource target by itself as a Kantian thing-in-itself, then instances can be thought of as its manifestations/phenomena. The data behind the video file might vary drastically from one instance to another for a wide variety of reasons, and yet they represent the same distinct remembrance of my embarrassment. At the same time, instances are experience-focused, and the presence of recipients is thus of significant importance, as no experience can be complete or valid without a subject. This dependency is, to some extent, comparable to the observer effect in physics. In my last note on Dawn, I rejected the Cartesian dualist analogy in favor of Spinoza’s monist view when attempting to illustrate the nature of DOBJs.
However, at a macro scale, I do find the dynamics between DOBJs and their instances better explained by Descartes’ theory of substance-attribute-mode. If we compare the original resource object or, to make our discussion easier, its Ling, to substance, then the DOBJ that Dawn generates for this Ling can be seen as its attribute and, in turn, the instances as the modes of this DOBJ (as ways of being that DOBJ). The transitive dependence relation is also mirrored, with instances dependent on the DOBJs and the DOBJs on what they ultimately represent. It might sound underwhelming, but that is not how information systems usually work, where files are created and maintained discretely. On a side note, Spinoza has his own substance-attribute-mode theory, but his monist perspective makes it too confusing to apply in this practical case. Well, at least, I was defeated enough to give up on that.

The ontological discussion on Dawn Resource Objects and their instances may seem redundant and even pedantic at times. However, the nature of these entities is the absolute foundation of Dawn and what everything else is established on. It is in fact why the system is being built the way it is. After all, Dawn is about the being of resources.

Through this ontological lens, our inquiry shifts the focus from what a resource object is to what the resource object is like. It is now clear that a universal identity, translating an object from its innermost substance level to the recipient-facing mode level, would be necessary, providing continuity that allows the instantiations of a DOBJ to be executed and distributed freely without a central authority. However, it begets another difficult question. What is identity? I had a full segment on this in the last post, but it barely scratched the surface. Before I drive myself mad, I suppose we could settle for a much easier quest: how to represent an identity. If the greatest existential question of who am I is prohibitive, then, at the very least, a driver’s license with a matching photo could, in some way, inform my identity. That’s exactly where Renaissance comes into the picture: to create a representation layer that functions like a unique identity and transforms resources into complete DOBJs, capable of and ready for actions that are usually limited to platforms. If that still sounds uninspiring: with DOBJs as the elementary units of Dawn’s decentralized distribution, we can connect and interact through objects, instead of platform-like systems, thanks to their ability to be uniquely identified universally. Such liberation empowers both the supply and demand sides, laying the groundwork for a MOFFAS-type market. However, one could easily spot a huge problem with this model. Without ambitious algorithms implemented by platforms to manage and facilitate relationships, how do the free-roaming resources and consumers “meet” each other in MOFFAS? That’s exactly what GEN ai is for, but that’s for a different time.

Now that we’ve established the necessity and purpose of Renaissance, it’s time to explore the structure.

Surely some revelation is at hand

As of now, there are two programs under Renaissance, EvE and Seal, for the tasks of identification and verification, respectively. Renaissance EvE, a bottom-up process, works to identify DOBJs based on information collected from external sources while Renaissance Seal, a top-down process, verifies the integrity and validity of a DOBJ instance. Both programs interface with users, which means that they offer front-end services accessible to users directly. I’ll break down both programs, but the easiest way to learn about them is through their demos on the showcase website.

Renaissance Seal

Technically, Seal is a mechanism to check response data integrity, although its scope is wider than what the term normally implies. Integrity is a looming hazard that I fear is as dangerous as the mental health crisis sweeping the world; and just like the epidemic of mental illness, the issue of integrity is constantly overlooked because not looking at it and not dealing with it simply make everything so much easier. When you open your browser and read a news article on AP, you assume that it’s the article you requested. We are so conditioned that we take it for granted: because clicking a link is always followed by seeing an article loaded on the screen, clicking delivers the article. However, this is not how information systems work and, if I am allowed to be frank, we are no better than Pavlov’s dogs (at least the treats they got were all real). When we click on something, what we eventually receive back on our end is determined by a few things, but our little finger tap is not among them. In most cases, it is up to the servers and, occasionally, savvy bad agents may sit midway and alter the responses the servers send. Meanwhile, local clients, such as our web browsers, can manipulate the end presentation as well. I was once so fed up with the election news cycle that I wrote a small extension to change photos of both Trump and Biden on any web page to funny cat pictures and their names to the most random combinations of words, like Red Cauliflower or Gourd Fire. That extension was actually the first thing I ever wrote and, despite its clumsiness, it probably saved my sanity.

So, in summary, getting the right data delivered to us depends on a healthy server that honors and fulfills our requests rigorously, on a good day free of transmission errors and shenanigans. It could also get complicated. Imagine a burned-out backend developer who intentionally taints the response every time you send a request, from something as trivial as the weather forecast to something as critical as a medical diagnosis. The developer would play the roles of a malfunctioning server and a compromised transmission at the same time, while neither of them is, technically, at fault. Admittedly, with a centralized Internet model in action, that is unlikely to happen, as servers are obliged as well as motivated to make sure that only genuine responses are sent back, so as to remain credible and reputable on behalf of their interested parties. On the other hand, a decentralized distribution environment is not conducive to such blind trust. To mitigate the challenge, a certain mechanism has to be in place to perform inspection and verification. Over the years, many have been proposed and implemented; now Renaissance Seal adds to those efforts, with a somewhat odd approach and vantage point.

The standard methods commonly used to ensure data integrity are checksums and other hash functions. There are quite a number of algorithms available, but the core idea is shared: derive a new block of data from the original object for comparison. They are effective, convenient and lightweight, but also limited in scope; for instance, they cannot protect against man-in-the-middle (MitM) attacks. What matters most here is that they won’t work with our conceptual DOBJs, either. Say we have a photo of a man chasing a squirrel, man_after_squirrel.jpg, and the hash generated by the standard methods is a product of the image data of man_after_squirrel.jpg exactly as it is. There’s zero tolerance. If we resize the photo by 1% or change one pixel value, the hash would be completely different and, as a result, cause the integrity check to fail, even though it still looks like the same photo, or the same DOBJ. Standard approaches like checksums compute hashes from the file data of individual instances independently. Seal sets itself apart by producing such a “hash” for the DOBJ across all of its instances.
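The zero-tolerance property is easy to demonstrate. Here is a minimal Python sketch; the byte string is a made-up stand-in for man_after_squirrel.jpg’s raw image data, which is an assumption for illustration only. Flipping a single bit produces a completely different SHA-256 digest, so an instance that still looks like the same photo would fail the check.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Standard cryptographic hash: any change to the input flips the digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for the raw bytes of man_after_squirrel.jpg.
original = b"pretend-image-data" * 100
tweaked = bytearray(original)
tweaked[0] ^= 0x01  # flip a single bit, e.g. one pixel nudged by a resize

print(sha256_hex(original) == sha256_hex(bytes(tweaked)))  # False: zero tolerance
```

This is exactly the behavior Seal departs from: the two payloads would render as the same DOBJ, yet no standard hash will ever agree on them.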

I’m bringing another important concept, qualia, into the discussion, borrowed from the fields of philosophy and psychology. Qualia are defined as instances of subjective experience of something. Within a DOBJ, they play a role in connecting the substance level of the object and the cognitive level thereof, where the aforementioned GEN ai mechanism mainly operates. Although qualia are naturally more relevant to the bottom-up EvE process for identification, they underlie Seal’s programming as well. Instead of verifying whether a file is an exact copy of man_after_squirrel.jpg, Seal checks whether it shows the same man chasing the same squirrel in the same backyard wearing the same cargo pants. Note that while modifications like scaling do not matter, others, such as cropping, do. The reasoning is clear. A resized image of a man chasing a squirrel in the backyard wearing cargo pants is still an image of a man chasing a squirrel in the backyard wearing cargo pants, while a cropped one may become an image of a man exercising in the backyard wearing cargo pants. That is to say, it has become a different resource object, and such a change shall cause the integrity check to fail. Qualia is a controversial concept, and introducing it into our conversation invites more questions. We will come back to it when we get to EvE; for now, the bottom line is that Seal is user-oriented. The subject of hashing is, however, not the only difference between Seal and the standard approaches. Normally, developers or resource owners publish the computed checksum alongside the data it derives from. Once it is downloaded, the user may run a utility to verify the integrity. In other words, we are checking the end deliveries of the response against each other, like a box of gifts and a packing list at the bottom claiming that it has everything.
Evidently, those methods cannot fight off the aforementioned MitM attacks or work in a decentralized model at all, as the checksum itself (the packing list) may not be authentic. Back to our surprise box analogy: an MitM attack occurs when the shipping company takes a few items from your box for their Christmas party and places an updated packing slip back in. How can we, as receivers, verify the gifts in the box then? The most intuitive answer is to check with the authority, in our example, the original sender of the box. In a decentralized distribution, when the one original sender is not available, such authority is usually achieved by deploying a set of shared ledgers. While shared ledgers are efficient and useful in many cases, they are too expensive for basic data integrity verification as the first or only option, which is expected to be done quickly and locally upon server response. Back to our little package challenge: how can we verify the items in the box with and only with the enclosed packing slip? The solution lies nowhere but in ourselves. While the power of computers has been humbling, “I know what I know because I am who I am” is still the best and most secure encryption algorithm, one that far exceeds what any machine could achieve. What if the slip in our gift box took the form of a crossword puzzle that requires easy personal information to complete? It would instantly piece together a multi-point lock that secures the package. This is the (hidden) power of implicit knowledge, nowadays widely used in challenge–response tests, such as CAPTCHA, to detect bots and spammers. In the same spirit, Seal forgoes the common hash strings and taps into our knowledge with “meaningful hashes” that human users on the receiving end can resolve effortlessly and instantly. It doesn’t mean that Seal would ask you to painfully identify all the hydrants in a bunch of low-resolution pictures or remember a 14-digit passcode that you set two months ago.
What it does is instrumentalize common sense, by taking a step back from technology and focusing on humans. The past few decades saw the fascinating development and evolution of information technology; while that has provided unprecedented convenience and empowerment, like any technology, it comes at a cost. To witness people, myself included, gradually and unconsciously incapacitated and reduced to less of ourselves is flatly saddening and, also, alarming. It may even seem inevitable at times, as technology by nature is cumulative, infectious and expansive. Have you ever scanned a QR code simply because it feels automatic, even though there’s nothing to be learned from the densely packed dots in a square? The reality is that such programming probably makes us quite happy or, as Roger Waters would say, comfortably numb; then again, it would be rather stressful to calculate the probability of my own demise every time I hold my phone over a QR code — so please spare me. I thought about this constantly while drawing up the basis of Seal. I wanted it to be humanistic, not only because it would make my method work but also because it reminds us, especially myself, that technologies should always put people first (animals and the environment included), not productivity or profits behind the fabulous banners. We are the beneficiaries, rather than the training data, test data or a document in a database. When our society eventually finishes the transition from mass production, a process that turns a man into part of a machine, to mass automation, a process that turns a machine into another machine to replace that man, we must have an enforceable system in place to protect public interest and parity, because our current model only sees this man, who is most of us, as overhead reduction on the quarterly report. That isn’t going to be easy and will require continuous awareness and determination that sometimes goes against our instincts and flawed human nature.
Seal is built with that awareness and the not-so-hidden agenda to fight the temptation of being, well, comfortably numb.

Can we get more Renaissance than that?

As a matter of fact, I can.

In practice, the origin servers (senders) would not know much about the receivers other than what’s included in the requests, regardless of the distribution structure. A crossword with personal information might be a romantic idea, but it won’t work in real life. Instead, Seal counts on something that both sides absolutely have knowledge of. The first line of Bohemian Rhapsody would be one example; granted, it is a question I ask myself all the time, but Seal takes the fantasy route and travels even further back in time for inspiration. There are two medium types that Seal currently processes, text and image. For text-based data, Seal generates hashes from well-known poetry, implemented by the Renaissance Seal EEC family of algorithms, named after my favorite English-language poet, E. E. Cummings, in spite of the fact that I had to drop his collection due to copyright concerns. For images, some of the most famous paintings of the Renaissance, Baroque and pre-Impressionist eras are utilized by Renaissance Seal PBE, named after my favorite painter, Pieter Brueghel the Elder. As mentioned above, Seal works with qualia, the subjective experience; therefore, EEC and PBE can both tolerate minor discrepancies. Inspired by print proofing in earlier times, EEC matches the target text content with one of the EEC Seals, a poem, through computation. Note that the matching is timestamp-dependent and updated periodically, which compensates for the loss of accuracy caused by the tolerance (false negatives). While developing EEC Seals for text-based DOBJs was quite straightforward, I wasn’t so lucky with PBE. I struggled for a few months to find a reliable method to recognize the same DOBJ across its instances, or qualia. For a text-based object, its instance objects usually differ in formatting. For example, EEC can be instructed to be insensitive to whether Hamlet, Act III is single-spaced or double-spaced, and that is easily executed.
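As a rough illustration of the idea, the sketch below pairs a normalized text with a poem in a timestamp-dependent way. To be clear, this is not EEC’s actual algorithm: the seal collection, the window length and the pairing function here are all placeholders of my own invention.

```python
import hashlib

# Placeholder seal collection; the real EEC Seals are well-known poems.
EEC_SEALS = [
    "shall i compare thee to a summers day",
    "two roads diverged in a yellow wood",
    "because i could not stop for death",
]

def normalize(text: str) -> str:
    """Formatting-insensitive form: single- and double-spaced
    instances of the same text normalize identically."""
    return " ".join(text.lower().split())

def eec_seal_for(text: str, timestamp: float, window: int = 3600) -> str:
    """Pair the normalized text with one seal; the pairing
    rotates every `window` seconds (timestamp-dependent matching)."""
    epoch = int(timestamp) // window
    digest = hashlib.sha256(f"{epoch}:{normalize(text)}".encode()).digest()
    return EEC_SEALS[digest[0] % len(EEC_SEALS)]

# Spacing and casing differences do not change the matched seal within a window.
print(eec_seal_for("To be, or not to be", 1000.0)
      == eec_seal_for("TO BE,   or not\nto be", 1000.0))  # True
```

The rotation is what makes the periodic updates mentioned above cheap: re-running the pairing with a new epoch is one hash, no re-publication of the underlying text required.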
Images, on the other hand, are a much more delicate situation; variation of any kind, regardless of scale, significantly alters the underlying image data, which is what PBE works on. I went through a series of experiments but couldn’t find one that struck the right balance between accuracy and tolerance, i.e. the classic overfitting-versus-underfitting dilemma. Then on a rainy Sunday night (not as dramatic as I’m making it sound), when I was staring blankly at the bathroom tiles, a sudden reminiscence of the mosaic wall of my childhood bathroom somehow led to an epiphany. It turned out that another mediation layer would be necessary to reach the balance. Without introducing new variables to further complicate the situation, that layer would have to be formed or, in other words, found, internally. It was yet another case of one of my favorite subjects to ponder (for fun), self-reference, and the challenge then became one of creating a self-checking test for images. For example, if we convert an image to black and white with the L grayscale algorithm, subtract the median from all pixel values, and then add them up, would the sum be a specific number, such as zero or 1707 (Euler’s year of birth)? Of course, that would be silly and practically useless, but it is what self-reference means in our case. My PBE algorithm is much more playful; it was directly inspired by and modeled after the puzzle game Sudoku. When we have an image object, the goal is to reach a complete grid after rounds of timestamp-based transformations, including the merging of PBE Seals (the paintings). Depending on the security level required, these transformations as well as the parameters of the grid can be tuned accordingly. Just like EEC, PBE leans towards leniency and tolerance, while the loss of accuracy is made up for by updating the Seals constantly.
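The “silly” self-referential test described above can be written out directly. This is a sketch of that toy check only, not PBE’s actual Sudoku-style procedure, and the pixel grid is an invented example.

```python
from statistics import median

def to_gray(rgb_pixels):
    """PIL-style 'L' conversion: L = 0.299 R + 0.587 G + 0.114 B."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

def self_check(rgb_pixels, target=0):
    """The toy self-referential test: subtract the median from every
    grayscale value and test whether the residuals sum to a fixed number."""
    gray = to_gray(rgb_pixels)
    m = median(gray)
    return sum(p - m for p in gray) == target

# A tiny made-up "image" whose residuals around the median cancel out.
img = [(10, 10, 10), (20, 20, 20), (30, 30, 30)]
print(self_check(img))  # True: (10-20) + (20-20) + (30-20) == 0
```

The check depends on nothing outside the image itself, which is the whole point of the internally found mediation layer.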
To go back to our package story one last time: if I fear that a genius thief somewhere is able to come up with an alternate solution to my crossword puzzle, for example, one that exploits the similarity between the spellings of beard and bread, I am happily confident betting that he or she cannot do the same against a whole new puzzle every five seconds.

The biggest advantage of Seal is that it is tiny. The Seal collections total under a few megabytes, which makes them easy for clients to carry in their applications. In cases where the set of keys is missing on the client side, Seal can still work, since the poems and images can be retrieved from the Internet easily. In cases where the checking utility is not provided, the computation is simple enough that, technically, it could be done by hand (Euler probably did more complicated ones with one eye). Again, with Seal, the integrity check is ultimately performed by us, the users.

There’s a different dimension to data integrity that the Seals would be valuable for. In fact, this dimension is not normally categorized under data integrity but under data authenticity instead. Note that authenticity, integrity, validity and veracity are all distinct concepts. The two concepts concerned with security, authenticity and integrity, are vastly different; checksums, for example, widely used in integrity tests, do not work for authenticity. In the system of Dawn and DOBJs, though, they do overlap since, again, DOBJs are the being of resource objects rather than specific data files. With the rise of Generative AI, it is becoming more and more difficult for human eyes or even commercial tools to detect forgeries, such as the infamous deepfake. Combined with the massive personal data brokerage that has been going on for the past decade, it is only a matter of time before the technology is so well developed that it can put anyone in a compromised position. Generative AI models feed and train on data of our own; the outputs are, ipso facto, not creations but multi-dimensional manipulations of our own data. However, that’s just the tip of the iceberg. The truth is that any computer file, no matter how technically superior, can be manipulated by, well, computers, and we’re only beginning to understand the true meaning and impact of that. One day, while you are laughing at a realistic rendering of Donald Trump eating a hotdog in an Indian slum, a different realistic rendering of yourself in an intimate scene with someone other than your spouse may just quietly pop up on the Internet. It would immediately get scanned, crawled and indexed by various services, spawning numerous copies across servers around the world.
What’s worse: if one realistic rendering can be generated, then chances are millions of them, along with millions of victims, are out there as well, and that would easily overwhelm law enforcement or any authority the public relies on for justice and clarity. EEC/PBE could undoubtedly act as a preliminary safety measure if the Seal of the original object is accessible and the forgeries are not substantially generative. Furthermore, for images that are regionally modified, a locality-first PBE Seal, which protects an n × m object as a set of (n/p)·(m/q) regions of size p × q, could even report discrepancies by coordinates.
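The locality-first idea can be sketched in a few lines: hash each p × q tile of the grid independently, then report the coordinates of tiles whose hashes disagree. This is a simplified sketch of my own; PBE’s actual timestamp-based transformations are not shown, and the pixel grids are invented examples.

```python
import hashlib

def region_hashes(pixels, p, q):
    """Hash each p x q tile of an n x m grayscale grid independently,
    keyed by tile coordinates. Assumes p divides n and q divides m."""
    n, m = len(pixels), len(pixels[0])
    tiles = {}
    for i in range(0, n, p):
        for j in range(0, m, q):
            block = bytes(pixels[r][c] for r in range(i, i + p)
                                       for c in range(j, j + q))
            tiles[(i // p, j // q)] = hashlib.sha256(block).hexdigest()
    return tiles

def diff_regions(a, b):
    """Tile coordinates where two region-hash maps disagree."""
    return sorted(k for k in a if a[k] != b[k])

original = [[0] * 4 for _ in range(4)]
modified = [row[:] for row in original]
modified[2][1] = 255  # a regional forgery in the lower-left quadrant

print(diff_regions(region_hashes(original, 2, 2),
                   region_hashes(modified, 2, 2)))  # [(1, 0)]
```

Only the tampered tile is flagged, so a regional modification can be localized without comparing whole-image hashes.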

Clearly, access to the authentic Seal records would be the key. Having the Seal delivered within a DOBJ Instance to the users would be ideal, but the chances are slim in situations where delinquencies are expected. We would likely have to have infrastructure in place providing universal access to the Seals, ideally in a decentralized distribution. However, despite the grandeur of this prospective universal authenticity validation mechanism, we have a more critical issue at hand. How do we fetch the Seal from the client side? How do we tell the service provider what we want when we don’t know what we are dealing with?

That’s what EvE is for.

To identify.

On a side yet serious note, since we are at it: what can we do to protect ourselves from forgeries that are (almost) impossible to detect once Generative AI takes the next leap? I’ve been thinking about it for quite some time now and still cannot come up with a solution that does not involve setting up an ultimate authenticity device, an over-the-top snapshot-based self-tracing framework that requires a massive amount of encryption and a brand-new set of the strictest protocols on transmission and data security. Instead of encoding objects, the solution, in some way, would be to encode time and being (sorry, cannot resist it) on a new dimension. Note that the snapshots are not necessarily images and the objects are not necessarily events. The system shall translate any type of fact into any type of valid representation. While the Seal would still serve as the end mechanism for verification, it would take significantly more computing resources and could worsen inequality if such a service is not made accessible to the public for free (see LOVN).

The good news is that it seems such a leap will still take a while, so, for now, let’s talk about EvE.

Renaissance EvE

If we go to a restaurant with cravings for Kjøttkaker and Lapskaus without knowing their names, how do we place the orders? (I just googled Norwegian dishes and these came on top.)

I suppose the normal course of action in our new challenge here is to describe to the chef what Kjøttkaker and Lapskaus are like. The former is a Norwegian meatball dish and the latter a stew; naturally, these two descriptive keywords are what our inquiry will likely build around. Once the chef’s ah-ha moment ends in two hot plates on the table, we can then determine whether they are the Kjøttkaker and Lapskaus we’ve been looking for. “Getting food to the table” awkwardly yet accurately explains what EvE does as a mechanism under Renaissance for bottom-up identification. From a functional perspective, EvE might resemble some of the technologies on the market that perform “reverse search.” In some ways, EvE does share the overall idea, but it differs on a fundamental level; as Dawn is a resource-object-oriented system, EvE services DOBJs rather than the system itself, unlike its counterparts. While a platform’s reverse-search algorithms strive to find the best matches for a target, EvE is only interested in determining whether a target could be an instance of a DOBJ. It might be a difficult concept to grasp because, again, we are so accustomed to platform-like systems and platform-based services.

At the moment, EvE is only active for image-based objects, or for the image data of multimedia objects treated as image-based objects. The reasons are manifold. For one, accessing audio from media elements on the front end is strenuous, and the size of raw audio data can be intimidating. At the same time, the fact that I still haven’t found a satisfactory strategy for making Seals for audio objects is probably the bigger factor. Following the existing pattern that matches the medium types of the Seal and the underlying subject, an audio object shall be “sealed” by another audio object, for example, a clip of a speech protected by a few bars from Handel’s Water Music, and that has turned out to be a challenging feat. However, there’s also a more nuanced aspect to my lack of motivation. For text and audio types, the algorithms developed and currently used by various search services (Google, Shazam, ChatGPT, etc.) are efficient and sleek. EvE would not have much to offer, even though it, again, differs fundamentally. On the other hand, EvE is more valuable for images and sequences of images (videos). Before we move on to discuss EvE for such image objects, here’s a decent summary on the common reverse image search algorithms, some of which will be referenced below for comparison.

Renaissance EvE is neither a machine learning nor a computer vision program; it is not developed to identify or classify objects in a photo or a video frame. Its sole purpose is to “shallowly” and “narrowly” construct a representation layer to identify a DOBJ, with no interest in what the DOBJ actually is. It is thus in a completely different category than most platform-based services running on a volume-driven profit model, for which such reverse-search functionality is usually part of a much bigger procedure, object recognition. Given a photograph of a beautiful European robin on a thin twig looking goofy, Google would fetch back a batch of image files from all over the Internet that have a European robin on a thin twig, with the exact copy probably on top; EvE only seeks the one DOBJ producing the “qualia” of that one particular European robin, in the entire history of all birds, sitting on that one particular thin twig on that particular day looking exactly that goofy. Compared to methods built on vector embeddings from deep learning models, EvE is not as advanced or “smart”; I would even say that it’s static, stiff and stringent. It does not calculate and rank by computed values, such as cosine similarity. Instead, EvE returns boolean results, true or false.
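To make the contrast concrete, here is a sketch using a plain average hash as a stand-in; EvE’s actual representation is not described here, so the algorithm, the grids and the threshold are all assumptions for illustration. The point is the shape of the answer: no similarity score, no ranked list, just a boolean verdict.

```python
def ahash_bits(gray_grid):
    """Average hash over an already-downscaled grayscale grid:
    1 where a pixel is above the mean, 0 otherwise."""
    flat = [p for row in gray_grid for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def is_same_dobj(a, b, max_hamming=1):
    """Boolean verdict, EvE-style: either the target could be an
    instance of the DOBJ, or it could not. No score, no ranking."""
    distance = sum(x != y for x, y in zip(ahash_bits(a), ahash_bits(b)))
    return distance <= max_hamming

robin = [[10, 200], [30, 220]]   # made-up downscaled instance
tinted = [[15, 205], [35, 225]]  # same scene, mild color filter
other = [[200, 10], [220, 30]]   # a different picture entirely

print(is_same_dobj(robin, tinted))  # True: the aberration is tolerated
print(is_same_dobj(robin, other))   # False
```

A ranking system would hand back all three grids sorted by similarity; the boolean formulation instead answers the only question EvE cares about.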

Not our robin but still a goofball. Photo credit: user zv745 at Reddit.

On the other hand, EvE is not as rigid as the “naive” algorithms, such as histogram similarity, or the ones based on perceptual hashing, which center on the idea of normalizing the image object and converting it into something that can be processed “naively.” EvE collects information from real-world instances, where unpredictable interferences are abundant. Our goofy European robin could turn out subtly tinted with a color filter on one website and sharpened with an editor on another. It could also get decorated with a lovely picture frame or an emoji sticker on its head when delivered in an app. EvE would have to tolerate these sorts of aberrations (“attacks”) as long as they do not alter the nature of the DOBJ, our European robin sitting on the thin twig looking goofy. Other times, as we discussed above, if we crop the twig out of the photo, then it becomes a different image object with different qualia, one telling the story of our goofy European robin perching somewhere on a sunny day.

A good and valid question here would be what happens if Dawn finds two or more DOBJs of a robin sitting on the thin twig looking goofy. Heraclitus would smirk (if he were not Heraclitus), but it actually points back to one of the axioms of the Dawn system, the Identity of Indiscernibles, which rejects the possibility that two DOBJs could be identical. The axiom has great implications and influences throughout the system; what’s pertinent here is the systemic respect, as a direct result of the axiom, for the individuality and independence of each DOBJ. In other words, a resource object that renders identical qualia to an existing DOBJ would not be accepted into Dawn in the first place, as the system imposes a preliminary test on discernibility that, in fact, comprises EvE and Seal. However, with individuality, it also goes far beyond that. So far, we’ve been discussing DOBJs as if they were the map of Italy. However, DOBJs are not just about qualia.
That photo of the European robin could have been snapped by a photographer, Jeremy Lee, on his costly Canon EOS 90D with the costly Canon EF 75–300mm f/4–5.6 III lens, which he spent all his savings on, the day after a bad breakup. Although such background information per se won’t make a DOBJ unique, it is part of the DOBJ that is unique and an integral part of its individuality. On a relevant note, meta information like this is secured by Seal EEC separately and usually sent back as part of the response to a successful EvE request.
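The fragility of the “naive” histogram approach under exactly the kind of tint “attack” described above can be shown with a toy sketch. The pixel values, bin count and threshold here are invented for illustration; real histogram methods work on full images and color channels.

```python
from collections import Counter

def grayscale_histogram(pixels, bins=8):
    """The 'naive' approach: reduce an image (here, a flat list of
    grayscale values 0-255) to a coarse, normalized brightness histogram."""
    hist = Counter(min(p * bins // 256, bins - 1) for p in pixels)
    total = len(pixels)
    return [hist.get(b, 0) / total for b in range(bins)]

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

original = [10, 10, 200, 200, 120, 120, 90, 90]   # made-up "robin" pixels
tinted = [p + 40 for p in original]               # a mild, nature-preserving tint

h_orig = grayscale_histogram(original)
h_tint = grayscale_histogram(tinted)
print(histogram_similarity(h_orig, h_orig))  # 1.0
print(histogram_similarity(h_orig, h_tint))  # 0.0: the tint shifted every
# pixel into a different bin, so the naive score collapses even though
# the depicted subject is unchanged
```

This is the behavior EvE has to avoid: a representation so tied to raw values that a benign filter destroys the match.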

To accommodate variations, EvE still packs a non-linear model to power its mechanism, with a feature extraction layer sitting in front, similar to the common structure adopted in computer vision. When I first started, I was using something resembling kernels as the “extractor” on raw pixels but later changed to much simpler edge detection to improve performance. The extracted features are then sent through a selected model and translated into a group of internal symbols, becoming the query-friendly representation layer of the DOBJ that acts as its projected identity. EvE does not implement a universal, one-model-fits-all strategy, but is rather condition-dependent and environment-aware. Although the visual data of a video is made up of the visual data of every single frame, its representation is not a collection of such single-frame groups. Instead, a different model is developed to process video objects continuously so that my favorite subject, time, can be exploited and reflected in the representation. On another front, separate models are also necessary for different instance contexts. When queries are made from natural scenes, with image data collected by phones, a significantly higher degree of distortion ensues. A less accurate and more malleable model is therefore needed, because the representation layer prepared by the frugal standard EvE model is too “stiff” to tolerate the discrepancies caused by unpredictable and uncontrollable factors in natural scenes, from hand trembling to lighting. Despite my ambition and best efforts, I have only been able to put together a clunky model for still images and, frankly, have yet to figure out how to make one that would work for videos without absurd costs.
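The extract-then-translate pipeline can be sketched with toy stand-ins. The horizontal-gradient edge detector, the threshold and the two-letter symbol alphabet below are my own illustrations of the general shape, not EvE’s actual extractor or model.

```python
def detect_edges(image):
    """Toy feature extraction: horizontal gradients on a grayscale
    image, given as rows of pixel values 0-255."""
    edges = []
    for row in image:
        edges.append([abs(row[x + 1] - row[x]) for x in range(len(row) - 1)])
    return edges

def to_symbols(edges, threshold=50):
    """Toy translation stage: map each extracted feature to an internal
    symbol ('E' for a strong edge, '.' for flat), forming the
    query-friendly representation layer."""
    return tuple("E" if v >= threshold else "." for row in edges for v in row)

image = [
    [10, 10, 200, 200],   # a bright region starting at column 2
    [12, 11, 198, 199],   # nearly the same row: small noise, same edge
]
print(to_symbols(detect_edges(image)))  # ('.', 'E', '.', '.', 'E', '.')
```

Note how the small pixel noise in the second row disappears in the symbol layer: that quantization is what buys a representation some tolerance while keeping it exact enough to test with a boolean comparison.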

As of now, the code that EvE runs on has not been made public, and that will likely remain so for the near future. I am still sullenly dissatisfied with it. The costs for large videos are almost unbearable, and its symmetrical operation haunts me (honestly, any symmetrical system is unsettling). However, the most significant and definitive part of EvE, as well as Seal, is not what functions in or underneath the system. Nor is it the algorithms per se, or the models that one would assume to be the “outputs” of such algorithms. Rather, it is the methodology of pairing a natural model with a particular set of algorithms that lies at the heart of the operation.

It is FeelingSafe.

Its hour come around at last

In the field of computer science, models are abstract mathematical representations of real-life objects, together with the translation of those objects into such representations. From the simplest encoding to state-of-the-art transformers, the size, complexity and purpose vary from one model to another, but the assumption that models are to be constructed is shared and unchallenged. Obviously, not constructing a model is an unorthodox approach. As a matter of fact, scheming and programming FeelingSafe felt like driving straight into oncoming traffic completely sober. From time to time, I would even laugh at myself about it and find my very own idea adorably absurd and refreshingly ridiculous. Of course, other than providing some badly needed self-amusement, FeelingSafe does come with skills and, even, gifts.

To understand how FeelingSafe works, we have to first go back to right before the beginning of the Renaissance and visit the Dominican friar Saint Thomas Aquinas, whose greatest achievement was to reconcile the philosophy of Aristotle with the teachings of the Catholic Church. He demonstrated that the prima facie contradictions between the two, rooted in the origin and the infinity of the universe, could be gracefully resolved by reasoning. He convincingly fitted Aristotle’s metaphysics, and his thinking in general, into the faith system of the Catholic Church.

It was a milestone, but how is that relevant to FeelingSafe? If we see the Catholic doctrines as an established model, then Aquinas used his reasoning as the algorithm to translate Aristotle’s corpus into an enhanced version that was compatible with the Catholic model. And, well, FeelingSafe works in a similar way, sort of.

I didn’t name the methodology FeelingSafe out of fear of driving in the wrong direction or awe for the Bible. I have a neighbor, Howard, who is an artist. Howard has been painting, drawing and sculpting for over five decades, but in recent years he became more passionate about making intricate scratchboards, which are small black boards he lithographs with white gesso. As his best friend and long-time critic, I am usually the first, if not the only, person to see these boards, whether finished or unfinished. Howard is stubborn, melancholy and cynical while proudly committed to his vision, and making these boards is a lonely, never-ending journey that has gradually become one with his life. Although I am not particularly into art that trends towards the decorative end, watching him work on one in a house full of distinctive scratchboards is an amazing and spiritual experience. It always reminds me of the famous lines from William Blake’s Auguries of Innocence that resonate with the teaching of Avataṃsaka Sūtra:

To see a World in a Grain of Sand

And a Heaven in a Wild Flower

Hold Infinity in the palm of your hand

And Eternity in an hour

Earlier this year, I received one of those scratchboards as a thank-you gift; it was the one I had once casually praised when I was over at his place a few years back. Howard had remembered it because I rarely expressed much interest in his scratchboards compared to his other works. I hung it next to my bed, on the uneven, poorly painted wall covered with my random scribbles on work and beyond. I walked past it a few times every day, and about two months ago, when I was resting in bed amid another battle with my tormenting chronic migraine, Howard’s scratchboard captivated me. In the dimly lit room, it was glowing, ineffably sublime. It was the first time that I looked into and beyond the thousands of details on that simple tiny tile and saw millions of possible worlds. And beyond those millions of possible worlds, there were billions more waiting to be unlocked in its invisible dimension, space. I had told him that I sometimes saw him as a Ruth Asawa who couldn’t afford a studio, and he was always happy to hear that. Ruth was an artist he admired.

Howard Gross. Feeling Safe, 2015.

So I suppose that, in that moment between bouts of headaches, Howard’s scratchboard on my wall metamorphosed into something more than a unique artistic arrangement of intricate lines, dots and patterns. It became a path to answers, instead of an answer, and a space for imagination, instead of one particular story. Its mightiness is promising. With the right algorithm, it would be the closest thing to holding infinity in the palm of one’s hand, where Aquinas could synthesize Aristotle and the Bible, Howard could weave the sorrow of the past and the hope for the future, and I could compose a new language to tell many things about the world. I took the scratchboard off the wall and found that, on its backside, Howard had already titled it: Feeling Safe. That’s where the name FeelingSafe comes from. On a side note, I feel obligated to add that this cute backstory leaves out a much deeper and grander connection, one that is beyond the scope of this post or my best writing abilities, one that is reflected in Spinoza’s thinking.

The way FeelingSafe operates may seem counterintuitive and even a bit off-putting (at first). It starts with model selection. Rather than building a model with training data, we find a “natural” model that is substantially potent and complex. Here, being natural means that the model is sourced from an existing, raw and homogeneous origin or, as we simply call it, a natural event, ranging from literal natural events, such as the historical rainfall data of North America, to completely manmade wonders, like Howard’s unique scratchboards from the past two decades. After a model is selected, it is transformed into representations that we can work with mathematically and programmatically. While data-rich events like North America’s rainfall history are easy to port, others may take an additional translation layer. For example, how do we convert Howard’s scratchboards hanging on his wall into numerical representations? One way to translate them is to borrow from the mechanism behind QR codes and encode the dot arrangements. Personally, I think it’s more fun to encode the spacings or the angles. After the model is set up and formatted, it is implemented to process input data into symbols, either as representations or as identification cues (for query purposes). Again, depending on the inputs and the model, such implementation could be simple and direct in some cases, and considerably complicated in others. In particular, video objects would involve multiple steps of preprocessing, which is awfully resource-consuming.
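The steps above can be condensed into a toy sketch: select a natural event, format it into a queryable numeric model, then use that model to translate inputs into symbols. The spacing measurements, the breakpoint scheme and every name below are hypothetical; they only illustrate the shape of the workflow, not any actual FeelingSafe model.

```python
# Hypothetical "natural event": spacing measurements (in mm) taken
# from the dots of a scratchboard. No training happens anywhere.
NATURAL_SPACINGS = [3.1, 7.4, 2.2, 9.8, 5.5]

def build_model(spacings):
    """Step 2: format the natural event into something we can work with
    mathematically; here, a sorted list of breakpoints."""
    return sorted(spacings)

def to_symbol(value, model):
    """Step 3: translate an input value into an internal symbol, the
    index of the first breakpoint it falls under (illustrative only)."""
    for i, breakpoint_ in enumerate(model):
        if value <= breakpoint_:
            return f"S{i}"
    return f"S{len(model)}"

model = build_model(NATURAL_SPACINGS)
# The same fixed model maps any input stream to symbols, so the
# operation's size and cost stay bounded and predictable.
print([to_symbol(v, model) for v in (1.0, 6.0, 50.0)])  # ['S0', 'S3', 'S5']
```

Because the model is fixed by the natural source rather than learned, two nodes holding the same source produce identical symbols for identical inputs, which is the property the deployment discussion below relies on.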

While EvE is obviously an application of FeelingSafe, Seal runs on the same engine as well. In its case, the Renaissance paintings and the poetry are the “natural models,” which work alongside the algorithms that compute on the target resources. I would even argue that Seal is the actual archetype of FeelingSafe while EvE is a variant, even though the concept of FeelingSafe was formulated after Seal was put together. Yet the two differ vastly in the deployment of the system. With Seal, the raw model can be deployed at the user interface level (carried by clients) and unpacked locally to perform on demand. Such transparency and client-side control are, unfortunately, difficult to replicate in the case of its sibling, EvE. The computation alone would exhaust personal devices. Nevertheless, as mentioned numerous times before, EvE would work ideally with a decentralized distribution of service nodes. Although I do selfishly enjoy having an opportunity to include my own beliefs in my programs, that isn’t the (major) incentive. Natural models are, in fact, a strategy to keep the size, performance and consumption of an operation bounded and predictable. They reduce the workload of individual nodes and, more importantly, make synchronization across the distribution much easier. Regrettably, I do not have the wisdom to envision absolute decentralization (e.g., Bitcoin), not to mention that I have fundamental problems with it; in fact, Dawn and LOVN are both built with the intention of being deployed at the community level. I will elaborate on this specific subject in a separate post when deployment is underway. As of now, the Dawn nodes are still under the direct governance of the “mothership,” preparing for their independence.

Slouching towards… slouching forward to be born

Renaissance is far from finished, or anywhere near a state that I could feel relatively content with. Of course, I do wonder whether a computer program could ever reach a state that makes its original author content at all. It always seems endless; you are making something that is not anything you are not making, but then the more you have it there, the less you think there is, sort of like this post: too long to read but too short to write. The bad news for me is that I’ll have to manage and “multitask” the 99 flavors of my discontent, as Renaissance, significant as it is, serves a bigger goal: the birth and the rise of Dawn.

The name Slouching Towards Renaissance comes from Joan Didion’s masterpiece, Slouching Towards Bethlehem. Famously, Didion took the book title from The Second Coming, a Yeats poem that was so stuck in her mind in those years that she could almost feel the lines “surgically implanted” in her inner ear.

The poem goes like this —

Turning and turning in the widening gyre

The falcon cannot hear the falconer;

Things fall apart; the centre cannot hold;

Mere anarchy is loosed upon the world,

The blood-dimmed tide is loosed, and everywhere

The ceremony of innocence is drowned;

The best lack all conviction, while the worst

Are full of passionate intensity.

Surely some revelation is at hand;

Surely the Second Coming is at hand.

The Second Coming! Hardly are those words out

When a vast image out of Spiritus Mundi

Troubles my sight: a waste of desert sand;

A shape with lion body and the head of a man,

A gaze blank and pitiless as the sun,

Is moving its slow thighs, while all about it

Wind shadows of the indignant desert birds.

The darkness drops again but now I know

That twenty centuries of stony sleep

Were vexed to nightmare by a rocking cradle,

And what rough beast, its hour come round at last,

Slouches towards Bethlehem to be born?

William Butler Yeats wrote it in 1919. Over one hundred years later, it was coded as a Seal in the Renaissance EEC.

A portrait of Yeats by John Singer Sargent. Yeats was also one of Nick Drake’s favorite poets and his influence is evident in his lyrics. Nick is the inspiration for Dawn.

Lately, Didion and her prose have been swinging vigorously and unrhythmically in my brain. I picture her writing frantically under an antique lamp, holding a cigarette. I picture her drinking and laughing at a poorly-written newspaper piece by the window, holding a cigarette. I picture her smoking that cigarette anxiously in a cheap motel room in the middle of a California desert. I picture her cursing and speeding on the LA highway basking in the light of the setting sun, still with that cigarette between her slender fingers. My wildest self-diagnosis is telling me that perhaps I am on the edge of it, that I am scrambling for an escape again after all these years. Only this time, things might be a little different. After all, I have a beast to wait on, to slouch towards Bethlehem together to be born.

p.s. I don’t smoke.