Message Boards

Meditation Culture

Is anyone here into machine learning?


RE: Is anyone here into machine learning?
11/3/17 10:39 PM as a reply to P K.
Generally with machine learning there needs to be a defined goal, or "fitness" indicator you can select for. Without that, the genetic algorithms or whatever you're using won't have a way of knowing if that particular permutation is successful or not.

All you need to do is clearly define enlightenment so that the algorithm can calculate whether this randomized permutation is "more enlightened" than the last one. Good luck with that.   emoticon
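
To make the point concrete, here is a minimal toy sketch (my own illustration, not from the thread) of why a genetic algorithm needs a numeric fitness function: the selection step has to compare candidates, so "enlightenment" would have to be reduced to something like the made-up `fitness` below.

```python
import random

# Hypothetical stand-in for a "fitness" indicator: without a concrete
# numeric function like this, the selection step has nothing to compare.
def fitness(x):
    return -(x - 3.0) ** 2  # best possible candidate is x = 3.0

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                       # select fittest half
        children = [s + rng.gauss(0, 0.1) for s in survivors]  # mutate to replace the rest
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()  # converges near 3.0 only because fitness() defines "better"
```

Every line of this depends on `fitness` returning a comparable number; delete it and the algorithm is blind, which is exactly the problem with an undefined goal.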

RE: Is anyone here into machine learning?
11/4/17 4:02 PM as a reply to P K.
Personally I am learning way more about my mind from those monks than I have in several decades with math and computers.  A few of my projects at work over the last few years have been ML projects.  Indeed my current project is an ML project implementing an online algorithm that tries to learn subtle things about the size of the internet from incomplete observations.  I do come at this from the perspective of one trained in mathematics; I see this stuff through the lens of statistics and am admittedly skeptical of the biological analogies (mostly just marketing to get research dollars).  Disclaimer: I was a shitty mathematician but I am a good programmer.

I think that some of what you allude to are just meta principles that are much bigger than ML.  Such as: the behavior of systems is related to minima/critical points of functions (e.g. actions/Lagrangians in physics, fitness functions in genetic algorithms).  Non-equilibrium systems (e.g. control systems) can sometimes be thought of as iteratively chasing minima.  Super duper important: complex systems usually have many critical points (a generalization of local minima), and when computing with them some a-priori information is usually imposed (regularization) to pick out particular critical points.  These ideas are quite old.  Thinking of any process in this meta-framework is likely to feel natural.  Just don't put too much emphasis on the more specific details (e.g. gradient descent comes up because it is the computationally simplest/cheapest way to iteratively compute a minimum; higher-order methods such as Newton's method are better in a certain mathematical sense but not with today's silicon; who knows what a biological system might do to iteratively find a minimum).
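
The gradient-descent-versus-Newton trade-off can be seen on a toy problem (my own example, minimizing f(x) = x^4): Newton's method uses second-derivative information and needs far fewer iterations, but each step costs more to compute, which is the reason given above for gradient descent's dominance on today's hardware.

```python
# Toy comparison: minimize f(x) = x^4, whose minimum is at x = 0.

def grad(x):
    return 4 * x ** 3    # f'(x)

def hess(x):
    return 12 * x ** 2   # f''(x)

def gradient_descent(x, lr=0.01, steps=1000):
    # Cheap per step (first derivative only), but needs many steps.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def newton(x, steps=20):
    # Costlier per step (second derivative too), but converges much faster.
    for _ in range(steps):
        x -= grad(x) / hess(x)
    return x

gd = gradient_descent(2.0)  # still noticeably away from 0 after 1000 steps
nt = newton(2.0)            # much closer to 0 after only 20 steps
```

In one dimension the Hessian is a single number; in a million-parameter model it is a million-by-million matrix, which is the "not with today's silicon" part.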

So ML has these meta-characteristics and has a history of using biological analogies, so it feels right.  On the other hand, pick up an ML model and you'll discover it's a toy; albeit a somewhat flexible toy carefully designed by human intelligence.  ML models are designed (and trained) by humans to solve very specific tasks.  Their flexibility is usually limited to tuning a number of parameters (for a modern deep learning network I think that may be millions).  To my knowledge, the idea of building a "general intelligence" via ML techniques is not even on the table (note I am talking about someone literally trying to build general intelligence in silicon, not just philosophizing about the possibility; the latter is all over the place, as are things like the simulation hypothesis).  AlphaGo sounds like it points in this direction, mind you (I have not paid real attention to it), but I'd still say learning to play Go is WAY far away from general intelligence (e.g. learning how to learn to play Go, learning which inputs are relevant to the task of playing Go, etc.).  Note even genetic algorithms suffer the same lack of generality (a human specifies a model with some finite number of parameters, a human defines a probability distribution for those parameters, a human defines a fitness function that is used to decide which of the current samples to kill and which to replicate).
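
The "flexibility is just a pile of tunable numbers" point is easy to make concrete: even a small fully-connected network has hundreds of thousands of parameters, every one of them sitting inside a shape a human chose. (The layer sizes below are made up for illustration.)

```python
# Count the parameters of a hypothetical small fully-connected network.
# The architecture (layer sizes) is fixed by a human; only the numbers
# inside it are "learned".
layer_sizes = [784, 512, 512, 10]

params = sum(n_in * n_out + n_out          # weight matrix + bias vector per layer
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(params)  # ~670k tunable numbers for even this modest net
```

Scale the hidden layers up and add convolutions and you quickly reach the millions mentioned above; modern networks go far beyond that.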

I don't actually think about this stuff too much, but I think the most interesting aspect of human intelligence is therefore the priors (regularizations) in our brains.  Somehow I doubt the "brains regularize with Bayesian priors; full stop" position that seems to come up in rationalist circles.  I think it is very possible that the priors in our brains are uniquely well encoded in our biology, and that capturing the same structure/information in silicon may be crazy, crazy hard.  Evolution could have made us out of silicon, but it didn't.  Ultimately a question of biology and physics well beyond my pay grade.

Unrelated to ML but directly related to the question of "are humans just Turing machines" is the following; I suspect you'd enjoy it (I've been meaning to reread it): https://arxiv.org/pdf/1306.0159.pdf

An example of a more-critical-than-you'd-expect-given-the-hype assessment of deep learning: https://arxiv.org/pdf/1412.6572.pdf  Money quote:  "The existence of adversarial examples suggests that being able to explain the training data or even being able to correctly label the test data does not imply that our models truly understand the tasks we have asked them to perform."  Ouch!

RE: Is anyone here into machine learning?
11/8/17 6:33 PM as a reply to P K.
I am no expert in this field, but I sometimes amuse myself with deep learning software, e.g. to find the most probable continuation of the Bible, to tell a dog's breed from its photo, etc. I am thinking about an application to tell whether a woman is dominant or submissive in bed. I know it is very non-Buddhist. emoticon

RE: Is anyone here into machine learning?
11/9/17 1:54 PM as a reply to Alesh Vyhnal.
To be honest I don't really grasp Buddhism's obsession with sexuality. I seem to have the opposite problem: I am taking medications to lower blood pressure, antidepressants, neuroleptics and benzodiazepines, which all annihilate libido. And so, quite easily, just by taking pills, I finally defeated Mara, the Evil One. ;)

RE: Is anyone here into machine learning?
11/9/17 1:59 PM as a reply to Alesh Vyhnal.
But please don't pity me. I still have some forms of intimacy with my beloved wife and watch porn. It would perhaps be sad if I didn't. emoticon

RE: Is anyone here into machine learning?
11/9/17 6:02 PM as a reply to Alesh Vyhnal.
My beloved brother! I thank you wholeheartedly for correcting my wrongdoing. My heart is now filled with love and joy since, after decades of quandaries and tribulations, I am finally on the right path. My exhilaration is so overwhelming that I would love to ... your ... just now. emoticon

RE: Is anyone here into machine learning?
11/9/17 9:56 PM as a reply to Alesh Vyhnal.
You say that I should conquer my tongue and my genitals, but I dare to correct your holy zeal, since you forgot to mention the sinful anus, me being a big fan of pegging. ;)

Yet it is true that we read in (AN 9, sutavā, Bhikkhu Bodhi, p. 1259): "(1) [An arahat] is incapable of intentionally depriving a living being of life; (2) he is incapable of taking by way of theft what is not given; (3) he is incapable of engaging in sexual intercourse; (4) he is incapable of deliberately speaking falsehood; (5) he is incapable of storing things up..."

I think that retaining sperm is roughly equivalent to retaining stool or urine. And now imagine that instead of (3) there were: "An arahat is incapable of defecating." Just imagine it: the result of all the lifelong striving and meditation: chronic constipation! emoticon

(I beg the pardon of all those whom I might have offended.)

RE: Is anyone here into machine learning?
11/10/17 10:49 AM as a reply to Alesh Vyhnal.
Thank you for your beautiful poem. emoticon I think it is a much greater defilement to bother oneself with some aggregates of pixels on my monitor than to have sex with a prostitute. emoticon

RE: Is anyone here into machine learning?
11/21/17 4:35 PM as a reply to P K.
Set your dropout percentage to 100%.
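
Taken literally (my own hand-rolled sketch, not a real framework API), dropout with a drop probability of 1.0 zeroes every activation: nothing survives.

```python
import random

def dropout(activations, p_drop, rng=random.Random(0)):
    """Minimal inverted dropout: zero each unit with probability p_drop,
    scale survivors by 1/(1 - p_drop) to keep the expected value."""
    if p_drop >= 1.0:
        # 100% dropout: every activation is annihilated.
        return [0.0] * len(activations)
    return [0.0 if rng.random() < p_drop else a / (1 - p_drop)
            for a in activations]

out = dropout([0.5, 1.2, -0.3], p_drop=1.0)  # everything zeroed
```

At p_drop=0.0 the layer passes everything through unchanged; at 1.0 the network outputs nothing at all, which is presumably the point of the joke.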

RE: Is anyone here into machine learning?
11/22/17 6:23 AM as a reply to Alesh Vyhnal.
Angra Mainyu:
it is hard not to behave unskillfully, because the mind is wired for us to be greedy, and indulgence in anything pleasant immediately starts rewiring the mind, creating lots of false directions which seem to point to something good (liberation, relief, etc.) but actually point to the hell the mind sews for itself

the rule is that if something is very unskillful, then you need to do without it completely and rethink reality without using it
Do you have a particular way of removing unskilful things? Just notice them?

What if removing unskilful things is unskilful?