_supert_

joined 10 months ago
[–] _supert_@alien.top 1 points 10 months ago

Thanks. The website was a rush job - I'll improve it.

[–] _supert_@alien.top 1 points 10 months ago (1 children)

Thank you for trying it. Is it enjoyable?

Yes, I've only opened one port / world so far.

hhgg is the name of the world, 7,-2 are the coordinates, 47 is the turn count, and 10 is a very, very rough first attempt at a score.

The "Uncover the truth..." is the model's attempt to generate a quest. You are right that there's no /quests command; it's a good idea and I might add one.

[–] _supert_@alien.top 1 points 10 months ago (1 children)

That's really nice. I was aiming for something similar with llama farm but you've done a much better job with editor integration.

[–] _supert_@alien.top 1 points 10 months ago

To make it worse, I had not forwarded the port properly. Please try again.

[–] _supert_@alien.top 1 points 10 months ago

I had not forwarded the port properly. Please try again.

[–] _supert_@alien.top 1 points 10 months ago (3 children)

Thanks for checking the documentation.

That is also not the port being served. It's one port per world, and the worlds are defined at https://chasm.run/worlds

[–] _supert_@alien.top 1 points 10 months ago

That's a good question. At the moment the client handles display, formats markdown, and so on. Being able to connect over telnet might be convenient, though it would make secure authentication somewhat more complicated.
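For comparison, a bare socket session would look roughly like the sketch below (placeholder port, not the real protocol) - you'd get raw, unrendered markdown and no authentication layer on top.

    # Rough sketch of a telnet-style raw connection; the port is a placeholder
    # and this is not Chasm's actual protocol. Output would be unrendered
    # markdown with no secure authentication.
    import socket

    HOST, PORT = "chasm.run", 4000   # placeholder world port

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall(b"look\n")
        print(sock.recv(4096).decode(errors="replace"))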

[–] _supert_@alien.top 1 points 10 months ago

Plenty of room for collaborators / contributions.

 

My fellow digital explorers,

In the spirit of discovery and innovation that has long defined our community, I stand before you today to extend an invitation of both challenge and opportunity. As we stand on the threshold of a new frontier of nonhuman intelligence, there can be only one thing on everyone's mind: digital waifus. Instead, I want to talk about interactive entertainment, where your unique skills and insights are needed more than ever.

Today, I am proud to announce the beta launch of Chasm, a FLOSS multiplayer text adventure game - a game that harnesses the power of advanced Llamas to create an immersive narrative world unlike any we have known before.

We choose to launch the game server... We choose to launch the game server in this decade and do the other things, not because they are easy, but because they are hard...ware intensive.

Yet, such a world cannot be realized by one person alone. It requires the collective effort, the diverse perspectives, and the passionate engagement of you - our community's finest.

As the server host and grand chief architect of this digital venture, I urge you to join me in this critical testing phase. Our mission is to delve into the depths of this lexical metaverse, to test the limits of synthetic linguistic creations, and to provide the feedback necessary to refine and perfect our shared experience.

The journey will be one of camaraderie, discovery, and the occasional encounter with large and dangerous bugs. But together, there is no error we cannot rectify, no narrative maze we cannot navigate.

This is not a drill, dear citizens of /r/localLlama! I'm putting out the call for the bravest, the boldest, the ones who look at a wall of text and see not a daunting block of words, but a grand canvas for adventure and untold stories.

So I ask you, will you step forward and answer the call? Will you be one of the few, the proud, the beta testers who will chart the course for a new era of text adventure gaming?

If you are ready to be part of this historic endeavor, please install and run the client to join our ranks in the firmament. And let me know below.

The future awaits, and it is the Llama's to write.

Ad astra per aspera,

supert

tl;dr

[–] _supert_@alien.top 1 points 10 months ago (1 children)

It is very good. Synthia and Euryale are also very good.

For 13Bs, MythoMax, Chronos-Hermes and Tiefighter are cool.

[–] _supert_@alien.top 1 points 10 months ago

Can it be merged with llama?

[–] _supert_@alien.top 1 points 10 months ago
 

Sorry to post hardware problems, but I think this is the community most likely to have dealt with similar issues.

Basically, I get hard freezes during inference.

  • EVGA X299 FTW-K
  • Intel(R) Core(TM) i9-9900X CPU @ 3.50GHz
  • 128GB DDR4 RAM
  • 1x NVIDIA RTX 3090
  • 1x NVIDIA RTX 4090
  • Corsair HX1500i power supply
  • two Samsung NVMe drives (one for root, one for swap)
  • two 5.5TB HDDs (for backups)
  • a Fractal Design case with fans in every available slot
  • CPU water cooling (Kraken?)
  • various USB peripherals (Topping E30, UMC202HD, Nitrokey, phone, keyboard, flash drives, etc.)

This is a personal workstation used for occasional LLM work with Python CUDA 11 libraries. I run Linux (Void) with a recent kernel (6.5.9_1) and the nvidia-535 driver.

Often, the machine will freeze under GPU load when both GPUs are in use (i.e. an LLM with layers split across both cards). This has kept happening across kernel updates and various CUDA versions. It's stable under gaming loads (e.g. Far Cry 5/6 on either card). No errors or kernel messages, and nothing logged to dmesg. Just a hard freeze.

I've tried switching the order / PCIe slots that the GPUs are using and removing other cards.

The CPU temperature is generally below 90C under full CPU load, and fully loading the CPU alone doesn't cause a freeze. I've stopped overclocking.

The GPU temperatures stay reasonable (around 60C under load), although the 3090 feels quite hot (they are reported to run a bit warm). Under load the cards sit at about 75% utilization, I presume because the PCIe 3.0 bus becomes the bottleneck, which I vaguely suspect may be the root of the instability.

I'm out of ideas; if anyone could suggest things to try, I'd be grateful.
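For anyone who wants to poke at the same kind of load, a minimal dual-GPU PyTorch loop along these lines (matrix sizes and loop counts are arbitrary, and this assumes the CUDA setup above) exercises both cards plus pinned-memory PCIe transfers without any LLM libraries in the mix:

    # Minimal dual-GPU stress sketch: matmuls on each card plus pinned-memory
    # host->device copies, to see whether raw compute and PCIe traffic alone
    # reproduce the hard freeze without an LLM stack involved.
    import torch

    assert torch.cuda.device_count() >= 2, "expects both the 3090 and 4090 visible"

    n = 8192
    mats = [torch.randn(n, n, device=f"cuda:{d}") for d in range(2)]
    scratch = [torch.empty(n, n, device=f"cuda:{d}") for d in range(2)]
    host = torch.randn(n, n).pin_memory()

    for step in range(10_000):
        for d in range(2):
            mats[d] = mats[d] @ mats[d]
            mats[d] = mats[d] / mats[d].norm()          # keep values finite
            scratch[d].copy_(host, non_blocking=True)   # force PCIe traffic per card
        if step % 100 == 0:
            torch.cuda.synchronize()
            print(f"step {step} ok")

If that loop also hangs the box, the problem is below the LLM layer (driver, PCIe, or power); if it runs clean, suspicion shifts back to the inference stack.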
