Bright Planet, Deep Web

By: Sam Vaknin, Ph.D.




www.allwatchers.com and www.allreaders.com are web sites in the sense that a file is downloaded to the user's browser when he or she surfs to these addresses. But that is where the similarity ends. These web pages are front-ends, gates to underlying databases. The databases contain records regarding the plots, themes, characters and other features of, respectively, movies and books. Every user query generates a unique web page whose contents are determined by the query parameters. The number of distinct pages that can be generated this way is mind-boggling. Search engines operate on the same principle: vary the search parameters slightly and totally new pages are generated. It is a dynamic, user-responsive and chimerical sort of web.
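The underlying pattern is easy to sketch. The snippet below is a hypothetical illustration in Python: the schema, the sample rows and the query form are all invented, not the actual design of either site, but it shows how a single handler can serve an effectively unlimited number of distinct pages from one underlying database.

```python
# A minimal sketch of the "front-end to a database" pattern described above:
# every query produces a page that exists only for that query. The table, its
# fields and the sample rows are hypothetical stand-ins.
import sqlite3

# A tiny in-memory catalogue standing in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE works (title TEXT, theme TEXT, plot TEXT)")
conn.executemany(
    "INSERT INTO works VALUES (?, ?, ?)",
    [
        ("Example Film A", "revenge", "A wronged hero settles old scores."),
        ("Example Book B", "redemption", "A drifter earns a second chance."),
    ],
)

def render_results_page(theme: str) -> str:
    """Generate a unique HTML page whose contents depend on the query."""
    rows = conn.execute(
        "SELECT title, plot FROM works WHERE theme = ?", (theme,)
    ).fetchall()
    items = "\n".join(f"<li><b>{t}</b>: {p}</li>" for t, p in rows)
    return f"<html><body><h1>Matches for '{theme}'</h1><ul>{items}</ul></body></html>"

# Vary the parameter slightly and a totally new page is generated.
print(render_results_page("revenge"))
print(render_results_page("redemption"))
```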

Sites like these are good examples of what www.brightplanet.com calls the "Deep Web" (previously, and inaccurately, described as the "Unknown" or "Invisible" Internet). BrightPlanet believes that the Deep Web is 500 times the size of the "Surface Internet" (a portion of which is spidered by traditional search engines). This translates to c. 7,500 TERAbytes of data (versus 19 terabytes in the whole known web, excluding the databases of the search engines themselves) - or 550 billion documents organized in 100,000 deep web sites. By comparison, Google, the most comprehensive search engine ever, stores 1.4 billion documents in its immense caches at www.google.com. The natural inclination to dismiss these pages as mere re-arrangements of the same information is wrong. Actually, this underground ocean of covert intelligence is often more valuable than the information freely available or easily accessible on the surface. Hence the ability of c. 5% of these databases to charge their users subscription and membership fees. The average deep web site receives 50% more traffic than a typical surface site and attracts far more links from other sites. Yet it is invisible to classic search engines and little known to the surfing public.
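A back-of-the-envelope calculation, using only the figures quoted above, puts these numbers in proportion:

```python
# Back-of-the-envelope arithmetic from the BrightPlanet figures quoted above.
deep_web_documents = 550e9     # 550 billion documents
deep_web_sites = 100_000       # organized in 100,000 deep web sites
google_documents = 1.4e9       # documents in Google's caches at the time

print(f"Documents per deep web site: {deep_web_documents / deep_web_sites:,.0f}")
# -> 5,500,000 documents per site, on average

print(f"Deep web vs. Google's index: {deep_web_documents / google_documents:,.0f}x")
# -> roughly 393 times the number of documents Google had cached
```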

It was only a question of time before someone came up with a search technology to tap these depths (www.completeplanet.com).

LexiBot, in the words of its inventors, is...

"...the first and only search technology capable of identifying, retrieving, qualifying, classifying and organizing 'deep' and 'surface' content from the World Wide Web. The LexiBot allows searchers to dive deep and explore hidden data from multiple sources simultaneously using directed queries. Businesses, researchers and consumers now have access to the most valuable and hard-to-find information on the Web and can retrieve it with pinpoint accuracy."

It places dozens of queries in dozens of threads simultaneously and spiders the results (rather as a "first generation" search engine would do). This could prove very useful with massive databases such as the human genome, weather patterns, simulations of nuclear explosions, thematic, multi-featured databases, intelligent agents (e.g., shopping bots) and third generation search engines. It could also have implications for the wireless internet (for instance, in analysing and generating location-specific advertising) and for e-commerce (which amounts to the dynamic serving of web documents).
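The technique itself - many directed queries placed simultaneously, each in its own thread, with the returned pages gathered for later processing - is straightforward to sketch. The Python snippet below illustrates that idea only; it is not LexiBot's actual code, and the endpoint URL and query parameter are stand-ins.

```python
# A sketch of the "dozens of queries, dozens of threads" technique described
# above - an illustration only, not LexiBot's actual code. The endpoint and
# the query parameter name are placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode
from urllib.request import urlopen

SEARCH_ENDPOINT = "https://example.org/"   # placeholder for a deep web front-end

def run_query(term: str) -> tuple[str, str]:
    """Send one directed query and return whatever the front-end serves back."""
    url = f"{SEARCH_ENDPOINT}?{urlencode({'q': term})}"
    try:
        with urlopen(url, timeout=10) as response:
            return term, response.read().decode("utf-8", errors="replace")
    except OSError as exc:                 # network errors, HTTP errors, timeouts
        return term, f"<error: {exc}>"

def run_queries(terms: list[str]) -> dict[str, str]:
    """Place all queries simultaneously, one thread each, and gather the results."""
    with ThreadPoolExecutor(max_workers=len(terms)) as pool:
        return dict(pool.map(run_query, terms))

if __name__ == "__main__":
    pages = run_queries(["human genome", "weather patterns", "shopping bots"])
    for term, page in pages.items():
        print(f"{term}: {len(page)} characters retrieved")
```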

This transition from the static to the dynamic, from the given to the generated, from the one-dimensionally linked to the multi-dimensionally hyperlinked, from deterministic content to contingent, heuristically-created and uncertain content - is the real revolution and the future of the web. Search engines have lost their efficacy as gateways. Portals have taken over, but most people now use internal links (within the same web site) to get from one place to another. This is where the deep web comes in. Databases are about internal links. Hitherto they existed in splendid isolation, universes closed to all but the most persistent and knowledgeable. This may be about to change. The flood of quality, relevant information this will unleash will dwarf anything that preceded it.

Brussels Morning: Why EU Should Ban ChatGPT

ChatGPT is a generative artificial intelligence agent that is based on a large language model (LLM) and is able to emulate human discourse convincingly enough to pass the Turing test (that is, to become indistinguishable from a human interlocutor).

 

Access to ChatGPT is public (subject to free registration). It integrates with the Internet via a plug-in. Leading search engines such as Google and Bing have added it to their offerings, giving their users the distinct impression that it is just another way of providing reliable answers to their search queries.

 

ChatGPT is likely to dominate search engines soon for three reasons:

 

1. Its output is in the form of digestible, bite-sized text capsules, eliminating the tedium of scrolling through dozens of search results and clicking on the links;

 

2. It appeals to authority by expressly claiming to have access to billions of documents; and

 

3. Text is always perceived as way more definitive than visuals or audio.

 

Should this transpire, it would portend an ominous scenario. ChatGPT gets its answers wrong more often than not, and when it does not know the answer, it "hallucinates": it confabulates on the fly. In short: it lies very often and then grandiosely refuses to back down.

 

The makers of this monstrosity claim that it is in counterfactual error only "occasionally". That is untrue. Even the friendliest research estimates are that it hallucinates about 20% of the time. The real figure is way higher.

 

Recently, Geoffrey Hinton, the AI pioneer, confirmed the risk posed by ChatGPT in a wide-ranging interview following his resignation from Google. He warned that we will imminently be swamped with fake information, false news and images, and that we will be unable to tell true from false.

 

Moreover: phrase the same query differently and you are bound to obtain an utterly disparate response from ChatGPT!

 

I posed 55 factual questions about myself to ChatGPT. My questions revolved around facts, not opinions or controversies: where I was born, where I reside, who my sister is, these kinds of basic data.

 

The correct answers to all my questions are easily found online in sources like Wikipedia, my own websites, interviews in the media, and social media. One click of a button is all it takes.

 

ChatGPT got 6 answers right, 12 answers partly right, and a whopping 37 answers disastrously wrong.
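Tallied as percentages (a simple computation from the counts just reported), the scorecard looks like this:

```python
# Tally of the informal test described above: 55 factual questions,
# scored as reported in the text.
right, partly_right, wrong = 6, 12, 37
total = right + partly_right + wrong      # 55 questions in all

for label, count in [("right", right), ("partly right", partly_right), ("wrong", wrong)]:
    print(f"{label:>12}: {count:2d} ({count / total:.0%})")
# Output:
#        right:  6 (11%)
# partly right: 12 (22%)
#        wrong: 37 (67%)
```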

 

It was terrifying to behold how ChatGPT weaves complete detailed fabrications about my life, replete with names of people I have never even heard of and with wrong dates and places added to the mix to create an appearance of absolute conviction and authority!

 

This is way more dangerous than all the fake news, disinformation, and conspiracy theories combined because ChatGPT is erroneously perceived by the wider public as objective and factual - when it is neither, not by a long shot.

 

The EU needs to take urgent steps to stem this lurid tide before ChatGPT becomes an entrenched phenomenon, especially among users who are gullible, ill-educated, young, or conspiracy-minded:

 

1.     If the creators of ChatGPT continue to refuse to fess up to the abysmal rate of correct answers afforded by their prematurely unleashed contraption, they should be made answerable under defamation and libel laws;

 

2.     The makers of ChatGPT should be compelled to publish timely and comprehensive statistics about usage and veracity rates; and

 

3.     ChatGPT is an ongoing research project. It should be banned from the public sphere and from search engines.

More generally, the EU should tackle the emerging technologies of artificial intelligence and their ineluctable impacts on the job markets, education, activism, and the very social fabric. Legal and regulatory frameworks should be in place when the inevitable encounter between man and machine takes shape.

 

AI is a great promise. But it must be regarded with the same wariness that we accord technologies like cloning or genome (gene) editing.

 

Rigorous regulation should prohibit any deployment of AI applications unless and until they have reached a level of stability, fidelity, and maturity tested in laboratories over many years in the equivalent of the rigorous clinical trials that we insist on in the pharmaceutical industry.

 



