“If the foundation is solid, everything else will follow.” – Unknown
“The loftier the building, the deeper must the foundation be laid.” – Thomas à Kempis
Foundation models have transformed artificial intelligence in recent years. A foundation model is a model trained on massive amounts of data (usually via unsupervised learning) that can be adapted to many different tasks. Models such as BERT or GPT sparked a revolution in which a single model could be adapted to all tasks in a domain, simplifying access to AI and reducing the data needed for any individual task. We now have foundation models for text and several other modalities, but for modalities such as graphs and tabular data, we do not. In this paper we discuss why no foundation model for graphs exists yet and how we might build one; specifically, we will answer these questions:
- Why do we want a foundation model for graphs? Why do we not have one yet?