
Usually when we have something like numbers (objects) and something like addition (an operation) we're accustomed to getting various rules that apply.

For example,

a + b = b + a

Also,

a + (b + c) = (a + b) + c

These work for addition and in fact for many other things we can do with numbers that we care about. But mathematicians also study objects and operations where some of these rules don't apply.

A somewhat famous example is about rotating physical objects in space. In that case, it matters what order you do the rotations in ("turn and flip" doesn't end up the same as "flip and turn"!). So if "a" and "b" referred to rotations in space and "+" referred to doing them one after another, then

a + b ≠ b + a

in some cases.
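You can see this concretely with a few lines of code. Here's a sketch in pure Python (the particular choice of two 90° rotations, about the x-axis and y-axis, is mine for illustration): composing them in one order gives a different rotation than the other order.

```python
# Two 90-degree rotations represented as integer 3x3 matrices:
# Rx rotates about the x-axis, Ry rotates about the y-axis.
Rx = [[1, 0,  0],
      [0, 0, -1],
      [0, 1,  0]]
Ry = [[ 0, 0, 1],
      [ 0, 1, 0],
      [-1, 0, 0]]

def compose(A, B):
    """Matrix product A @ B: apply rotation B first, then rotation A."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# "Turn then flip" is not the same as "flip then turn":
print(compose(Rx, Ry) == compose(Ry, Rx))  # False
```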

Universal algebra is about thinking about all of these kinds of rules and how they relate to each other, for any kind of objects and any kinds of operations. So numbers are one kind of object and addition is one kind of operation, but there can be infinitely many kinds of objects and infinitely many kinds of operations, and some will follow certain patterns and others won't.

But in some cases, objects and operations that follow certain patterns (sometimes called "laws") automatically have to follow other patterns.

To take one example, if

a + b = b + a

and

a + (b + c) = (a + b) + c

then it is necessarily also true that

a + b + c = c + b + a

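The derivation is short: (a + b) + c = c + (a + b) = c + (b + a) = (c + b) + a, using commutativity twice and nothing else. You can also verify the implication mechanically on a tiny scale. Here's a sketch in Python (my own illustration): brute-force all 16 binary operations on the two-element set {0, 1}, and check that every one satisfying the two laws above also satisfies the rearranged law.

```python
from itertools import product

elems = [0, 1]
pairs = list(product(elems, repeat=2))
triples = list(product(elems, repeat=3))

# Every binary operation on {0, 1} is just a table of 4 output values.
# Keep the commutative, associative ones and check the derived law on each.
verified = []
for outputs in product(elems, repeat=4):
    table = dict(zip(pairs, outputs))
    op = lambda x, y: table[(x, y)]
    commutative = all(op(a, b) == op(b, a) for a, b in pairs)
    associative = all(op(a, op(b, c)) == op(op(a, b), c) for a, b, c in triples)
    if commutative and associative:
        # The rearranged law: (a + b) + c == (c + b) + a
        verified.append(all(op(op(a, b), c) == op(op(c, b), a)
                            for a, b, c in triples))

print(all(verified))  # True: the law follows in every case
```

Six of the 16 operations turn out to be both commutative and associative (AND, OR, XOR, XNOR, and the two constant operations), and all six pass.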
Some of these patterns have been very extensively studied because they recur over and over again, or because they apply to things like numbers (and rotations) that we care about a lot in relation to things we frequently encounter in life (and mathematics and computer science). But other patterns may be obscure and not that well-studied.

If you hypothesize some laws about ways of combining objects, you might ask which other laws are necessarily implied (or necessarily impossible) as a result of those laws. Tao noted that there are actually thousands of simple-to-state laws and so a research project is to figure out the logical status of how all of them relate to all of the others. Which ones require others to be true? Which ones prevent others from being true? Which ones are independent of others, so they might be true or not?
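To see what "independent" means in practice: a standard way to show that law A does not force law B is to exhibit a finite counterexample model that satisfies A but violates B. Here's a sketch in Python (my own illustration, not the project's actual tooling) showing that commutativity does not imply associativity, by searching the binary operations on {0, 1}:

```python
from itertools import product

elems = [0, 1]

def find_separating_magma():
    """Search all 16 binary operations on {0, 1} for one that is
    commutative but NOT associative."""
    for outputs in product(elems, repeat=4):
        table = dict(zip(product(elems, repeat=2), outputs))
        op = lambda x, y: table[(x, y)]
        commutative = all(op(a, b) == op(b, a)
                          for a, b in product(elems, repeat=2))
        associative = all(op(a, op(b, c)) == op(op(a, b), c)
                          for a, b, c in product(elems, repeat=3))
        if commutative and not associative:
            return table
    return None

magma = find_separating_magma()
print(magma is not None)  # True: a counterexample exists (e.g. NOR)
```

The search succeeds (NOR is one such operation), so commutativity and associativity are independent laws. The project's scale is vastly larger, but the basic move is the same: prove an implication, or find a model separating the two laws.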

There are some charts about familiar classifications of these structures according to certain laws

https://en.wikipedia.org/wiki/Magma_(algebra)#Types_of_magma

but Tao is sort of suggesting that this classification is just getting started, and we can learn a lot more about how structures relate to each other.

I should note that he is limiting this to "equational axioms" which means that he's not considering some of the kinds of rules that are often considered in such studies (those containing constants, not just variables). For example, when dealing with numbers and addition there is a number 0, which doesn't change other numbers when added to it. This fact causes lots and lots of nice theorems about arithmetic to work out. There is similarly a "non-rotation" in rotations which doesn't change the position of objects when you "rotate" them by not moving them. That also causes some of the same theorems that would apply to arithmetic with numbers to apply in that case!

Tao's project is, at least for the time being, not considering rules like "there is an object that doesn't cause a change when applied to other objects" (the "law of identity", which is represented by a downwards blue arrow in that Wikipedia article). These rules are important in mathematics, but I guess it's more complicated to consider how they do or don't relate to one another, so this project will simply ignore them, and only consider laws that can be stated only with variables.

Does that help a bit?

I guess another point is just that it's been very common in mathematics to try to consider things at higher and higher levels of abstraction, in order to find theorems that work for many different situations. For instance, Boolean algebra like in computer logic has objects 0 and 1, and operations OR and AND (and perhaps XOR). It turns out that we can interpret Boolean algebra with XOR and AND as a previously-discovered mathematical structure called a finite field

https://en.wikipedia.org/wiki/Boolean_algebra#Values

... whereupon tons of theorems about fields immediately automatically apply to Boolean algebra, even though perhaps those theorems weren't originally conceived of as relating to Boolean logic at all. So part of universal algebra is like "if we prove as much as we can with as few assumptions as possible, we can discover results that will readily apply to lots of new situations in the future". What Tao is proposing to do is a part of that undertaking, again at a super-high level of generality.
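As a small concrete taste of that payoff (a sketch in Python, my own illustration): treating XOR as addition and AND as multiplication on {0, 1} gives the two-element field GF(2), so field theorems such as the distributive law hold automatically.

```python
from itertools import product

bits = [0, 1]
xor = lambda a, b: a ^ b   # plays the role of addition in GF(2)
land = lambda a, b: a & b  # plays the role of multiplication

# A field theorem: multiplication distributes over addition.
distributes = all(land(a, xor(b, c)) == xor(land(a, b), land(a, c))
                  for a, b, c in product(bits, repeat=3))

# Additive identity 0 and multiplicative identity 1, as in any field.
identities = all(xor(a, 0) == a and land(a, 1) == a for a in bits)

print(distributes and identities)  # True
```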



Not the original asker, but I just wanted to appreciate this explanation: simple, concise, but still maintaining a one-to-one correspondence with the actual mathematics. Thank you!


> Does that help a bit?

Holy smokes thank you!!! Yeah that makes much more sense now. Thank you for taking the time to write all that up!

Would Gödel’s incompleteness theorem throw a monkey wrench into some of this effort? Probably not; Tao would have already thought of that, I guess.


There is no a priori reason why the aimed-for results should be unprovable à la Gödel.

However, it turns out the general question has been proven undecidable. Tao says so in the post:

> I will remark that the general question of determining whether one set of equational axioms determines another is undecidable.

Even though the general problem is undecidable, individual instances are still potentially solvable.


I remark that it's the "several equations -> several equations" case that's undecidable.

This project focuses only on "1 eq -> 1 eq", so it's even more of a gray area.


Awesome, thanks for clarifying!


A really good question! I've been out of the field for a long time, but it's important to stress the difference between "interactive proof assistants" and "automated theorem provers." The latter consist of various software tools that, given a well-formed logical formula, attempt to automatically derive a proof thereof in a given logic. There are of course well-understood limits to such tools, given the foundational results of Gödel, Turing, etc. in the early 20th century.

On the other hand, interactive proof assistants such as Lean, mentioned in the title article, or Isabelle are really something more like functional programming languages in which a human mathematician can specify and prove a given theorem, relying on the work of other mathematicians as the equivalent of a third-party library. Again as with programming languages, the different tools offer various trade-offs between expressiveness and the kinds of guarantees that can be made at compile time about the validity of a proof. It has traditionally been viewed as far too time-consuming to actually specify even intermediate mathematics in such tools, but LLMs really change the situation, as Terence Tao in particular has demonstrated.

For anyone interested in the topic, I highly recommend the work of the CMU logician Jeremy Avigad. See, for example:

* Formally Verified Mathematics: https://www.andrew.cmu.edu/user/avigad/Papers/cacm.pdf

* Automated Reasoning in Mathematics: https://link.springer.com/chapter/10.1007/978-3-031-63498-7_...

* The Mechanization of Mathematics: https://www.ams.org/journals/notices/201806/rnoti-p681.pdf



