machinelearning


KingsmanVince, in Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models
KingsmanVince, in Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing
nsa,

Please don't post links to reddit.

KingsmanVince,

I know we are moving away from Reddit. However, if I don't link, I feel like we may miss out on good threads on r/machinelearning. Moreover, the authors don't only post arxiv links, they post other stuff such as Summary, Key points, ... (e.g. this).

So can I at least put them in the posts instead of posting in a comment?

Lenguador,

I find the link valuable. Despite the proliferation of AI in pop culture, actual discussion of machine learning research is still niche. The community on Reddit is quite valuable and took a long time to form.

nsa,

If there isn't any discussion on reddit (no discussion in this case), I don't see a reason to link to reddit; you can just link to the project page. That said, if you think there is important discussion happening that is helpful for understanding the paper, then use a teddit link instead, like:

https://teddit.net/r/MachineLearning/comments/14pq5mq/r_hardwiring_vit_patch_selectivity_into_cnns/

KingsmanVince,

I will follow that, then.

nsa,

That's appreciated!

nsa, in Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models

It seems like for creative text generation tasks, metrics have been shown to be deficient; this even holds for the newer model-based metrics. That leaves human evaluation (both intrinsic and extrinsic) as the gold standard for those types of tasks. I wonder if the results from this paper (and other future papers that look at automatic CV metrics) will lead reviewers to demand more human evaluation in CV tasks like they do for certain NLP tasks.

SSamDav, in Extending Context Window of Large Language Models via Positional Interpolation

One cool thing about this work is that there was a concurrent discussion on Twitter about the proposed method, from different authors.

nsa,

do you have a link?

miro, in Extending Context Window of Large Language Models via Positional Interpolation

Is this similar to what MPT did to extend its context length?

nsa,

hmmm... not sure which model you're referring to. do you have a paper link?

Blaed,

I believe it's a different technique (at least as far as I understand the topics).

According to Mosaic, MPT (i.e. MPT-7B-StoryWriter-65k+) uses a different underlying architecture which enables their long context lengths.

The original author of this new method (SuperHOT by kaiokendev) shares what he has learned about it here:
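
Roughly, as I understand it, SuperHOT and the positional interpolation paper both rescale the position indices fed into RoPE, so a longer sequence still lands inside the position range the model saw during pre-training instead of extrapolating past it. A minimal sketch of that idea (the function names and the 2048 pre-training length are illustrative, not taken from either implementation):

```python
import torch

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE: one rotation angle per (position, frequency) pair.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions, inv_freq)

def interpolated_positions(seq_len, trained_len=2048):
    # Positional interpolation: compress the position indices so that
    # position seq_len - 1 maps back onto roughly trained_len - 1.
    scale = min(1.0, trained_len / seq_len)
    return torch.arange(seq_len).float() * scale

# An 8192-token sequence gets positions 0, 0.25, 0.5, ... instead of 0, 1, 2, ...
angles = rope_angles(interpolated_positions(8192), dim=128)
print(angles.shape)  # torch.Size([8192, 64])
```

MPT shouldn't need this trick because, as far as I know, it uses ALiBi-style attention biases rather than rotary positions, which is what enables its longer contexts in the first place.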

nsa, in VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks

Also reminds me of this ICLR paper: Linearly Mapping from Image to Text Space.

ragnarokonline, in r/MachineLearning finally received a warning from u/ModCodeOfConduct

Got eem

KingsmanVince, in Machine Learning Beginner Info/Resources

I also want to share some resources.
For PyTorch,

For TPU,

nsa, in The Curse of Recursion: Training on Generated Data Makes Models Forget

If the effect is strong enough, it could have a very negative impact on LLM training in the near future, considering more and more of the internet contains ChatGPT and GPT-4 generated content and automatic detectors are currently quite poor.
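
To make the failure mode concrete, here is a toy sketch of the collapse dynamic (a stylized Gaussian example of my own, not the paper's actual setup): each new "scrape" is generated by a model fit to the previous one, and because generated data under-represents the tails, the fitted distribution keeps narrowing.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)  # generation 0: "human" data

for generation in range(8):
    mu_hat, sigma_hat = data.mean(), data.std()
    print(f"gen {generation}: mean={mu_hat:+.3f} std={sigma_hat:.3f}")
    # The next generation trains only on model output, which rarely
    # covers rare events, so clip away the tails before refitting.
    samples = rng.normal(mu_hat, sigma_hat, size=20_000)
    lo, hi = np.quantile(samples, [0.05, 0.95])
    data = samples[(samples > lo) & (samples < hi)]
```

Real LLM training is obviously far messier, but the direction is the same: rare content is the first thing to disappear.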

Deliverator,

Yeah, it does not bode well for the future, especially combined with the current explosion of low-quality, profit-driven content. I fear that if left unchecked we could approach some kind of Kessler Syndrome-style scenario, where the desire for rapid growth and profit poisons the well in the long term. "Garbage in, garbage out"
