<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Capsul Devlog]]></title><description><![CDATA[this is where we post progress updates about capsul💊, our tiny lil cloud <3]]></description><link>https://blog.breaksoftware.xyz/</link><image><url>https://blog.breaksoftware.xyz/favicon.png</url><title>Capsul Devlog</title><link>https://blog.breaksoftware.xyz/</link></image><generator>Ghost 3.41</generator><lastBuildDate>Thu, 26 Feb 2026 21:31:27 GMT</lastBuildDate><atom:link href="https://blog.breaksoftware.xyz/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[returning to the project, and the capsul dispenser]]></title><description><![CDATA[Now Bellsprout shows up and asks Tangela, "Hey, I want to learn how to do this too! Can I please run a capsul on your home server as well?"  Bellsprout creates a new capsul on Tangela's home server.]]></description><link>https://blog.breaksoftware.xyz/blog/returning-to-the-project-and-the-capsul-dispenser/</link><guid isPermaLink="false">6906ae993296b100014c2823</guid><dc:creator><![CDATA[Forest Johnson]]></dc:creator><pubDate>Sun, 02 Nov 2025 01:08:05 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="returningtotheproject">returning to the project</h3>
<p>It's been half a year since our last post. j3s has been busy moving to a different city, and I fell off the pattern of working on capsul regularly.  Too many other things were taking up my attention, and I was struggling to muster up motivation.   Initially, we had planned to rebuild all the current <code>capsul.org</code> features first before we started implementing anything new and exciting.</p>
<p>This approach may have gotten me into trouble with my own motivation: why build another &quot;pay money, receive VM&quot; system when there are already so many?  I've had some conflict about this with capsul since the very beginning, fueled by a desire to build something that doesn't currently exist: transformational tech. Simply re-writing capsul would be familiar ground with familiar issues (centralization, etc.).</p>
<p>So from now on, my work will continue only as it can be fueled by dreams and desires.  If not a clone of capsul.org, then what exactly <em>are</em> we building?</p>
<h3 id="10yearsago">10 years ago</h3>
<p>My original motivation to work on capsul came from an idea I had when I was a lot younger... Frustrations about the way the internet was built, and an urgent imperative to effectively paper it over, to try to make the frustrating reality disappear.  Ideally, to eventually establish something permanent which could work for everyone forever.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: html-->
But in my defense, all I wanted was a way to set up a computer so it can publish to the internet directly, just as easily as it can consume stuff from someone else's server on the internet. <br>
Is that so much to ask? That's how it worked on the internet originally <span style="font-family: monospace; white-space:pre; font-size:1.2em">¯\_(ツ)_/¯</span>
<!--kg-card-end: html--><!--kg-card-begin: markdown--><p>The classic: Got a problem caused by tech? Ope. Ya better create more tech to solve it!</p>
<p>I had already started a project to try to do the &quot;create more tech to solve it&quot; thing mere months after I started working on capsul back in 2020. In a way, I think seeing how capsul had some success inspired me to move forward with that project as well (called <a href="https://sequentialread.com/greenhouse-retrospective-and-future/"><code>greenhouse</code></a> at the time).</p>
<p>At first, j3s wasn't interested in it.  Probably at least in part because my plans at the time were so expansive, much more so than what I ever wrote about on my blog.  But as we've returned to the capsul project, we've considered how it went and what that means for:</p>
<ul>
<li>us,</li>
<li>for <a href="https://cyberia.club">cyberia</a>,</li>
<li>and for all the other folks who SSH into VMs running on <a href="https://picopublish.sequentialread.com/files/rathouse.mp4"><code>rathouse</code></a>...</li>
</ul>
<p>We've both concluded that what capsul does today is great, but it's not quite enough for the interested parties: We want it to be easier to run at home, for it to be more than just a service powered by a server in a rack located far away from us.  So we eventually settled on integrating <code>greenhouse</code>'s <a href="https://git.sequentialread.com/sqr/threshold/media/branch/multi-tenant/readme/diagram.png">TLS SNI routing and TCP reverse-tunneling</a> to enable easier publishing to the internet through a gateway.</p>
<p>Now there's only one problem. What are we going to call this new feature?</p>
<h3 id="thecapsuldispenser">the <code>capsul dispenser</code></h3>
<p><strong><code>capsul-dispenser</code></strong> refers to this new network tunnel implementation. There are two main reasons for it. First, we want multiple capsuls to be able to share an IP address, because not everyone needs their own, and IP addresses are only getting more expensive. Second, it enables capsul to run on a home network which only gets one IP address, and makes capsul even easier to set up there: port-forwarding and dynamic DNS become optional instead of required.</p>
<p>I created this comic strip to try to explain it:</p>
<p><img src="https://picopublish.sequentialread.com/files/capsul-comic3.avif" alt="Tangela creates an account on capsul.org. &quot;I want to publish a website or app to the internet&quot;, they say. Then they create a virtual machine on there. Bellsprout and Oddish show up (more and more users), and they say, &quot;This is a cool website or app, we want to use it.&quot;  The capsul vm is sweating.  Tangela upgrades the capsul vm to a larger size. Now the capsul isn't sweating anymore, but Tangela is. &quot;Oh dear, paying for all of this compute is getting expensive!&quot;, they say, &quot;But I can't let my friends down!&quot;  Next, capsul.org says &quot;We have a new feature. Run on your own server.&quot; After thinking for a second, Tangela responds, &quot;I've enjoyed hosting for my community so far. I'll give it a try with a $35 fanless computer from eBay&quot;.  Then Tangela installs the capsul cog software onto the home server, and adds their server config to their capsul.org account. A Network Tunnel is created, from the capsul.org cloud into the home server. Next, Tangela migrates the capsul vm from capsul.org onto the cog on the home server. &quot;Phew, with dedicated hardware, my popular app is running smoother, with less monthly cost!&quot;, they say. Now Bellsprout shows up and asks Tangela, &quot;Hey, I want to learn how to do this too! Can I please run a capsul on your home server as well?&quot;  Bellsprout creates a new capsul on Tangela's home server.  &quot;This is so cool, I'm going to start my own capsul hub too, so I can help other people host home servers someday,&quot; Tangela says.  Tangela registers the my-local-hub.org domain name, sets up dynamic DNS, installs the capsul hub software, and configures port forwarding on their home router.  Bellsprout installs the capsul cog software on their home server, and migrates their capsul vm  to it.  A network tunnel is created from my-local-hub.org to Bellsprout's home server. 
And thus, the process can repeat, again and again...  From an experiment in a VM to a community resource. From a single-tenant home server to a multi-tenant one. And an insulated node stuck behind a NAT, to a new public service provider on the global internet."></p>
<p>-- <a href="https://sequentialread.com">forest</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[we're rewriting capsul in golang]]></title><description><![CDATA[instead of doing the classic thing and plowing forward with feature work and "cleaning it up later", we're going to pause for a bit to make sure we get this rewrite right.]]></description><link>https://blog.breaksoftware.xyz/blog/were-rewriting-capsul-in-golang/</link><guid isPermaLink="false">69069b8a3296b100014c2802</guid><dc:creator><![CDATA[Forest Johnson]]></dc:creator><pubDate>Sat, 05 Apr 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>today, forest and i discussed all of the future paths that capsul could take. we decided that before we launch any new features, we need to focus on our most burdensome source of technical debt - the language we chose.</p>
<p>when we first wrote capsul, we chose python + flask because we figured that was the most accessible language for our community in the case that random people wanted to contribute patches.</p>
<p>this didn't quite work out, and it wound up making capsul harder to work on for both forest &amp; myself. choosing python especially complicated the concurrency story - golang is simply very good at concurrency, and python is not.</p>
<p>BUT, we also want capsul to be as easy to self-host as possible. so instead of doing the classic thing and plowing forward with feature work and &quot;cleaning it up later&quot;, we're going to pause for a bit to make sure we get this rewrite right.</p>
<p>forest and i are both very experienced golang devs, and we think that we can make capsul much easier to run and maintain by moving from python to go. we aim to have a feature-complete parallel implementation in the coming weeks/months.</p>
<p>in fact, the rewrite is already underway - we're doing everything in a private repo for now, but stay tuned - we'll release something public just as soon as we can :)</p>
<p>love, <a href="https://j3s.sh">jes</a> &amp; <a href="https://sequentialread.com">forest</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[reintroducing capsul]]></title><description><![CDATA[we want to make capsul the best lil cloud there is. not just another $5/month rent-seeking project; a genuinely better option which is more accessible, more fun, and still has the power and reliability to be much more than a toy.]]></description><link>https://blog.breaksoftware.xyz/blog/reintroducing_capsul/</link><guid isPermaLink="false">69069b0b3296b100014c27f4</guid><dc:creator><![CDATA[Forest Johnson]]></dc:creator><pubDate>Sat, 29 Mar 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://capsul.org">capsul</a> has a rich history. born through <a href="https://cyberia.club">cyberia.club</a>, our first server was literally cobbled together on the rooftop of my apartment complex. about 10 of us were there, perhaps a little drunk, and we covered the first 1u server in stickers. it's still around the clubhouse at <a href="https://layerze.ro">layer zero</a> - we named the server "baikal", after the deepest lake in the world, and a joke on "cyberia" sounding like "siberia". here is a picture of baikal from back then:</p><figure class="kg-card kg-image-card"><img src="https://picopublish.sequentialread.com/files/baikal-old.jpg" class="kg-image" alt="1u laying next to empty anti-static bags, duct tape, and a glossy plastic PowerMac G4. goodnight white pride sticker. tor sticker. riseup.net sticker. old defcon sticker. chinese high voltage warning sticker. food not bombs sticker. comcast sucks sticker. eliminate DRM sticker. docker nyan cat sticker.  "></figure><p>when capsul started, it wasn't even called capsul - it was called "cvm", which stood for "cyberia virtual machines".</p><p>i created virtual machines for people by hand with a small set of shell scripts, and those people paid me in cash. cvm was cool like that - covert, human, and communal. 
i'd take 60 bux, write it down in a spreadsheet, and spin up a VM.</p><p>some vms still have a <code>cvm-</code> prefix internally.</p><p>eventually, <a href="https://sequentialread.com/">forest</a> came along and blessed cvm with a self-service web interface. at this point, we renamed cvm to capsul. it took forest about a week to pump out capsul v1, which looked like this:</p><figure class="kg-card kg-image-card"><img src="https://trash.j3s.sh/capsulv1.png" class="kg-image" alt="screenshot of capsul website with akira-inspired pill logo ascii art. Pricing, FAQ, Changelog, Support. Fast, private compute by cyberia.club"></figure><p>baikal + capsul ran many virtual machines for many people for nearly five years, with only a few minor mishaps along the way.</p><p>well, except...</p><h3 id="atlanta">atlanta</h3><p><a href="https://sequentialread.com/capsul-rumors-my-demise-greatly-exaggerated/">one time, the disks in baikal were basically about to explode themselves</a>, and we were too embarrassed to ask the <a href="https://cyberwurx.com/colocation/">CyberWurx</a> datacenter staff to try to help us fix it remotely after we had mailed them two different PCI-E SSD risers that didn't fit in the server chassis. So we had to fly to Atlanta (where baikal was in a rack at CyberWurx) to fix them. forest and i made a little trip of it:</p><figure class="kg-card kg-image-card"><img src="https://trash.j3s.sh/atlanta-1.jpg" class="kg-image" alt="forest and i on the floor, on our computas"></figure><figure class="kg-card kg-image-card"><img src="https://trash.j3s.sh/atlanta-2.jpg" class="kg-image" alt="baikal all comfy in the server rack"></figure><figure class="kg-card kg-image-card"><img src="https://trash.j3s.sh/atlanta-3.jpg" class="kg-image" alt="forest &amp; i trying desperately to fix baikal's storage. forest is wearing a covid mask and doing his best impression of a frustrated caterpillar"></figure><p>in the end, we prevailed. 
we prevented a critical storage mishap, and baikal lived on for another several years.</p><h3 id="epic-emergency-server-migration-">EPIC emergency server migration 🤯</h3><figure class="kg-card kg-image-card"><img src="https://picopublish.sequentialread.com/files/discort03424.png" class="kg-image" alt="forest on discord saying &quot;WE DID IT!&quot; it reboots"></figure><ul><li>baikal (our old server) could no longer handle the load, was constantly crashing</li><li>rathouse (our NEW server) was already racked up and ready to go</li><li>we wanted to wait until we could get backups working on the new system before we migrated... but real life had other plans for us</li><li>capsul was fully down for about a day and a half</li></ul><h3 id="stripe-doomsday">stripe doomsday</h3><figure class="kg-card kg-image-card"><img src="https://picopublish.sequentialread.com/files/stripe4209.png" class="kg-image" alt="screenshot of stripe error message: the information on your account could not be verified by the IRS"></figure><p>stripe cut off our credit card payments because the cyberia computer club's minnesota nonprofit status had lapsed after everyone who was doing the work of maintaining it faded away.</p><p>we had been the ones maintaining capsul for many years at that point, and in 2023, cyberia congress <a href="https://wiki.cyberia.club/hypha/congress_and_the_board/minutes/2023-07-30#Capsul_Conversation">officially released</a> capsul to be owned by us instead of being owned by the club.</p><p>so in order to get stripe payments working again, we "failed forward" and created a new legal entity that will be responsible for operating capsul: <a href="https://breaksoftware.xyz">break software llc</a></p><h3 id="future">future</h3><p>okay, so it's 2025. we recently installed a new server (rathouse), and have renewed energy for this project. 
forest and i are getting more serious about trying to make capsul a real thing.</p><p>we've been talking with great folks around various smallweb communities -- the kind of folks capsul was originally created for, and who we still want to offer a great service to. it's always been like this; we use capsul ourselves, and our own wants and needs as admins of an <a href="https://cyberia.club/">independent community-hosting project</a> have guided capsul's development:</p><ul><li>we needed better storage performance to make our chat server happy, so we adjusted the storage settings to speed up our disk reads and writes as much as possible.</li><li>we still wanted to be able to back up and restore our VMs, so we integrated <a href="https://github.com/abbbi/virtnbdbackup">virtnbdbackup</a> directly into capsul.</li></ul><p>we want to make capsul the best lil cloud there is. not just another $5/month rent-seeking project; a genuinely better option which is more accessible, more fun, and still has the power and reliability to be much more than a toy.</p><p>we gave capsul a CSS makeover, a light theme, fixed some longstanding bugs, and updated all of our OS images. we have a lot of ideas, but they're still baking.</p><p>if you wanna say hi, you can always send us email at <a>support@capsul.org</a> -- otherwise, we're always hanging out in <a href="https://cyberia.club/matrix">matrix</a>. :)</p><p>i'll try to post here at least monthly.</p><p>see you soon &lt;3 &lt;3 &lt;3</p><p>love,</p><p><a href="https://j3s.sh">j3s</a>, <a href="https://sequentialread.com/">forest</a></p>
Well, almost.]]></description><link>https://blog.breaksoftware.xyz/blog/rumors-of-my-demise-have-been-greatly-exaggerated/</link><guid isPermaLink="false">69069ab03296b100014c27ea</guid><dc:creator><![CDATA[Forest Johnson]]></dc:creator><pubDate>Fri, 17 Dec 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><hr>
<p>NOTE: there's a media-rich HTML version of this post on<br>
forest's blog:</p>
<p><a href="https://sequentialread.com/capsul-rumors-my-demise-greatly-exaggerated/">https://sequentialread.com/capsul-rumors-my-demise-greatly-exaggerated/</a></p>
<hr>
<pre><code>Forest                                         2021-12-17

                     WHAT IS THIS?

If you're wondering “what is capsul?”, see:

https://capsul.org

Here's a quick summary of what's in this post:

    cryptocurrency payments are back


    we visited the server in person for maintenance


    most capsuls' disks should have trim/discard support
    now, so you can run the fstrim command to optimize
    your capsul's disk. (please do this, it will save us
    a lot of disk space!!)


    we updated most of our operating system images and
    added a new rocky linux image!


    potential ideas for future development on capsul


    exciting news about a new server and a new capsul fork
    being developed by co-op cloud / servers.coop

                        ~



  WHAT HAPPENED TO THE CRYPTOCURRENCY PAYMENT OPTION?

Life happens. Cyberia Computer Club has been hustling
and bustling to build out our new in-person space in
Minneapolis, MN:

https://wiki.cyberia.club/hypha/cyberia_hq/faq

Hackerspace, lab, clubhouse, we aren't sure what to call
it yet, but we're extremely excited to finish with the
renovations and move in!

In the meantime, something went wrong with the physical
machine hosting our BTCPay server and we didn't have
anywhere convenient to move it, nor time to replace it,
so we simply disabled cryptocurrency payments
temporarily in September 2021.

Many of yall have emailed us asking “what gives??”,
and I'm glad to finally be able to announce that

“the situation has been dealt with”,

we have a brand new server and the blockchain syncing
process is complete, cryptocurrency payments in bitcoin,
litecoin, and monero are back online now!

    —&gt;   https://capsul.org/payment/btcpay   &lt;—

                        ~

  THAT ONE TIME CAPSUL WAS ALMOST fsync()'d TO DEATH

Guess what? Yall loved capsul so much, you wore our disks
out. Well, almost.

We use redundant solid state disks + the ZFS file system
for your capsul's block storage needs, and it turns out
that some of our users like to write files. A lot.

Over time, SSDs will wear out, mostly dependent on how
many writes hit the disk. Baikal, the server behind
capsul.org, is a bit different from a typical desktop
computer, as it hosts about 100 virtual machines, each
with their own list of application processes, for over 50
individual capsul users, each of whom may be providing
services to many other individuals in turn.

The disk-wear-out situation was exacerbated by our
geographical separation from the server; we live in
Minneapolis, MN, but the server is in Georgia. We wanted
to install NVME drives to expand our storage capacity
ahead of growing demand, but when we would mail PCI-e to
NVME adapters to CyberWurx, our datacenter colocation
provider, they kept telling us the adapter didn't fit
inside the 1U chassis of the server.

At one point, we were forced to take a risk and undo the
redundancy of the disks in order to expand our storage
capacity and prevent “out of disk space” errors from
crashing your capsuls. It was a calculated risk, trading
certain doom now for the possibility of doom later.

Well, time passed while we were busy with other projects,
and those non-redundant disks started wearing out.
According to the “smartmon” monitoring indicator, they
reached about 25% lifespan remaining. Once the disk
theoretically hit 0%, it would become read-only in order
to protect itself from total data loss.
So we had to replace them before that happened.

https://picopublish.sequentialread.com/files/smartmon_dec2021.png

We were so scared of what could happen if we slept on
this that we booked a flight to Atlanta for maintenance.
We wanted to replace the disks in person, and ensure we
could restore the ZFS disk mirroring feature.

We even custom 3d-printed a bracket for the tiny PCI-e
NVME drive that we needed in order to restore redundancy
for the disks, just to make 100% sure that the
maintenance we were doing would succeed &amp; maintain
stability for everyone who has placed their trust in us
and voted with their shells, investing their time and
money on virtual machines that we maintain on a volunteer
basis.

https://picopublish.sequentialread.com/files/silly-nvme-bracket2.jpg

Unfortunately, “100% sure” was still not good enough:
the new NVME drive didn't work as a ZFS mirroring partner
at first ⁠— the existing NVME drive was 951GB, and the
one we had purchased was 931GB. It was too small and ZFS
would not accept that. f0x suggested:

    [you could] start a new pool on the new disk,
    zfs send all the old data over, then have an
    equally sized partition on the old disk then add
    that to the mirror



But we had no idea how to do that exactly or how long it
would take &amp; we didn't want to change the plan at the
last second, so we ended up taking the train from the
datacenter to Best Buy to buy a new disk instead.
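
# In hindsight, f0x's recipe would have looked roughly
# like this. We never ran it, and the pool and device
# names here are made-up examples:

zpool create newpool /dev/nvme1n1
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newpool
# then repartition the old disk to match, and attach the
# partition to turn the new pool into a mirror:
zpool attach newpool /dev/nvme1n1 /dev/nvme0n1p1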

The actual formatted sizes of these drives are typically
never printed on the packaging or even mentioned on PDF
datasheets online. When I could find an actual number
for a model, it was always the lower 931GB.
So, we ended up buying a “2TB” drive as it was the only
one Best Buy had which we could guarantee would work.

So, lesson learned the hard way. If you want to use ZFS
mirroring and maybe replace a drive later, make sure to
choose a fixed partition size which is slightly smaller
than the typical available space for the size of drive
you're using, in case the replacement drive was
manufactured with slightly less available formatted
space!!!
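
# A sketch of that fixed-partition-size approach (device
# names and the 920GiB figure are made-up examples):

parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart zfs 1MiB 920GiB
parted /dev/nvme1n1 mklabel gpt
parted /dev/nvme1n1 mkpart zfs 1MiB 920GiB
# mirror the equally-sized partitions, not whole disks:
zpool create tank mirror /dev/nvme0n1p1 /dev/nvme1n1p1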

Once mirroring was restored, we made sure to test it
in practice by carefully removing a disk from the server
while it's running:

https://picopublish.sequentialread.com/files/zfs_disk_replacement/

While we could have theoretically done this maintenance
remotely with the folks at CyberWurx performing the
physical parts replacement per a ticket we open with
them, we wanted to be sure we could meet the timeline
that the disks had set for US. That's no knock on
CyberWurx, moreso a knock on us for yolo-ing this server
into “production” with tape and no test environment :D

The reality is we are volunteer supported. Right now
the payments that the club receives from capsul users
don't add up to enough to compensate (make ends meet for)
your average professional software developer or sysadmin,
at least if local tech labor market stats are to be
believed.

We are all also working on other things; we can't devote
all of our time to capsul. But we do care about capsul,
we want our service to live, mostly because we use it
ourselves, but also because the club benefits from it.

We want it to be easy and fun to use, while also staying
easy and fun to maintain. A system that's aggressively
maintained will be a lot more likely to remain maintained
when it's no one's job to come in every weekday for that.

That's why we also decided to upgrade to the latest
stable Debian major version on baikal while we were
there. We encountered no issues during the upgrade
besides a couple of initial omissions in our package
source lists. The installer also notified us of several
configuration files we had modified, presenting us with
a git-merge-ish interface that displayed diffs and
allowed us to decide to keep our changes, replace our
file with the new version, or merge the two manually.

I can't speak more accurately about it than that, as
j3s did this part and I just watched :)

                        ~

               LOOKING TO THE FUTURE

We wanted to upgrade to this new Debian version because
it had a new major version of QEMU, supporting virtio-blk
storage devices that can pass-through file system discard
commands to the host operating system.

We didn't see any benefits right away, as the vms
stayed defined in libvirt as their original machine types,
either pc-i440fx-3.1 or a type from the pc-q35 family.

After returning home, we noticed that when we created
a new capsul, it would come up as the pc-i440fx-5.2
machine type and the main disk on the guest would display
discard support in the form of a non-zero DISC-MAX size
displayed by the lsblk -D command:

localhost:~# sudo lsblk -D
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sr0         0        0B       0B         0
vda       512      512B       2G         0

Most of our capsuls were pc-i440fx ones, and we upgraded
them to pc-i440fx-5.2, which finally got discards working
for the grand majority of capsuls.
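
# Roughly how one of those machine type upgrades looks
# with virsh (the domain name is a made-up example; take
# a backup of the domain XML first):

virsh dumpxml capsul-abc123 | grep machine
virsh edit capsul-abc123
# change machine='pc-i440fx-3.1' to machine='pc-i440fx-5.2'
# and make sure the disk driver passes discards through:
#   &lt;driver name='qemu' type='qcow2' discard='unmap'/&gt;
# then fully shut down and start the capsul to apply it.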

If you see discard settings like that on your capsul,
you should also be able to run fstrim -v / on your
capsul which saves us disk space on baikal:

welcome, cyberian ^(;,;)^
your machine awaits

localhost:~# sudo lsblk -D
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sr0         0        0B       0B         0
vda       512      512B       2G         0

localhost:~# sudo fstrim -v /
/: 15.1 GiB (16185487360 bytes) trimmed

^ Please do this if you are able to!

You might also be able to enable an fstrim service or
timer which will run fstrim to clean up and optimize
your disk periodically.
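
# On most systemd-based distros that looks something like
# this (a sketch; check what your distro actually ships):

sudo systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer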

However, some of the older vms were the pc-q35 family of
QEMU machine type, and while I was able to get one of
ours to upgrade to pc-i440fx-5.2, discard support still
did not show up in the guest OS. We're not sure what's
happening there yet.

We also improved capsul's monitoring features; we began
work on proper infrastructure-as-code-style diffing
functionality, so we get notified if any key aspects of
your capsuls are out of whack. In the past this had been
an issue, with DHCP leases expiring during maintenance
downtimes and capsuls stealing each other's assigned IP
addresses when we turned everything back on.

capsul-flask now also includes an admin panel with
1-click-fix actions built in, leveraging this data:

https://git.cyberia.club/cyberia/capsul-flask/src/commit/b013f9c9758f2cc062f1ecefc4d7deef3aa484f2/capsulflask/admin.py#L36-L202

https://picopublish.sequentialread.com/files/admin-panel.jpg

I acknowledge that this is a bit of a silly system,
but it's an artifact of how we do what we do. Capsul
is always changing and evolving, and the web app was
built on the idea of simply “providing a button for”
any manual action that would have to be taken,
either by a user or by an admin.

At one point, back when capsul was called “cvm”,
everything was done by hand over email and the
commandline, so of course anything that reduced the
amount of manual administration work was welcome,
and we are still working on that today.

When we build new UIs and prototype features, we learn
more about how our system works, we expand what's
possible for capsul, and we come up with new ways to
organize data and intelligently direct the venerable
virtualization software our service is built on.

I think that's what the “agile development” buzzword from
professional software development circles was supposed to
be about: freedom to experiment means better designs
because we get the opportunity to experience some of the
consequences before we fully commit to any specific
design. A touch of humility and flexibility goes a
long way in my opinion.

We do have a lot of ideas about how to continue
making capsul easier for everyone involved, things
like:

    Metered billing w/ stripe, so you get a monthly bill
    with auto-pay to your credit card, and you only pay
    for the resources you use, similar to what service
    providers like Backblaze do.



   (Note: of course we would also allow you to
   pre-pay with cryptocurrency if you wish)

    Looking into rewrite options for some parts of the
    system: perhaps driving QEMU from capsul-flask
    directly instead of going through libvirt,
    and perhaps rewriting the web application in golang
    instead of sticking with flask.


    JSON API designed to make it easier to manage capsuls
    in code, scripts, or with an infrastructure-as-code
    tool like Terraform.


    IO throttling your vms:
    As I mentioned before, the vms wear out the disks
    fast. We had hoped that enabling discards would help
    with this, but it appears that it hasn't done much
    to decrease the growth rate of the smartmon wearout
    indicator metric.
    So, most likely we will have to enforce some form of
    limit on the amount of disk writes your capsul can
    perform while it's running day in and day out.
    80-90% of capsul users will never see this limit,
    but our heaviest writers will be required to either
change their software so it writes less, or pay more
    money for service. In any case, we'll send you a
    warning email long before we throttle your capsul's
    disk.



And last but not least, Cyberia Computer Club Congress
voted to use a couple thousand of the capsulbux we've
received in payment to purchase a new server, allowing
us to expand the service ahead of demand and improve our
processes all the way from hardware up.

(No tape this time!)

https://picopublish.sequentialread.com/files/baikal2

Shown: Dell PowerEdge R640 1U server with two
10-core xeon silver 4114 processors and 256GB of RAM.
(Upgradable to 768GB!!)

                        ~

                    CAN I HELP?

Yes! We are not the only ones working on capsul these
days. For example, another group, https://coopcloud.tech
has forked capsul-flask and set up their own instance at

https://yolo.servers.coop

Their source code repository is here
(not sure this is the right one):

https://git.autonomic.zone/3wordchant/capsul-flask

Having more people setting up instances of capsul-flask
really helps us, whether folks are simply testing or
aiming to run it in production like we do.

Unfortunately we don't have a direct incentive to
work on making capsul-flask easier to set up until folks
ask us how to do it. Autonomic helped us a lot as they
made their way through our terrible documentation and
asked for better organization / clarification along the
way, leading to much more expansive and organized README
files.

They also gave a great shove in the right direction when
they decided to contribute most of a basic automated
testing implementation and the beginnings of a JSON API
at the same time. They are building a command line tool
called abra that can create capsuls at the user's
request, as well as many other things like installing
applications. I think it's very neat :)

Also, just donating or using the service helps support
cyberia.club, both in terms of maintaining capsul.org and
reaching out and supporting our local community.

We accept donations via credit card (Stripe), or in
Bitcoin, Litecoin, or Monero via our BTCPay server:

https://cyberia.club/donate

For the capsul source code, navigate to:

https://git.cyberia.club/cyberia/capsul-flask

As always, you may contact us at:

mailto:support@cyberia.club

Or on matrix:

#services:cyberia.club

For information on what matrix chat is and how to use it,
see: https://cyberia.club/matrix

Forest                                         2021-12-17

© Attribution-ShareAlike 4.0 International
    Cyberia Computer Club 2020-∞
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[rollin' onwards with a web application]]></title><description><![CDATA[We have to make our own seat belts for this, an
experience and practice that I personally feel is highly
under-rated.]]></description><link>https://blog.breaksoftware.xyz/blog/reintroducing-capsul/</link><guid isPermaLink="false">69069a433296b100014c27e0</guid><dc:creator><![CDATA[Forest Johnson]]></dc:creator><pubDate>Wed, 20 May 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><pre><code>
Forest                                         2020-05-20


                WHAT'S NEW IN CAPSUL?

Capsul has been operated by hand so far, with business
conducted via email. Obviously, this isn't the best
user experience. If no one is on the other end at the
time, the user might feel as if they are shouting into
the void.

Ideally, users could pay for service, create and destroy
capsuls, and monitor their capsul's status at any time.

So we set out to create an application enabling that,
while keeping things as simple as possible. As of today,
you can experience it firsthand!

            —&gt;   https://capsul.org/   &lt;—

      WHAT IS CAPSUL? WHY WOULD ANYONE DO THAT?

Capsul started out as a “for fun” project to host
multiple VMs with different operating systems on the same
physical server.

A cloud compute provider experiment to find out:

    How hard is it to build the basic
    compute-as-a-service functionality that has been
    mythologized and commoditized by some of the biggest
    software businesses of all time?


    What problems have to be solved in order to do
    this at a small scale?


    And last but not least,
    how much better-than-the-big-boys can we do? :P



I heard about Capsul and I thought, cool, why not.

At first, I was slightly dismissive of the project —
why re-invent the wheel? There are lots of established
tools for creating cloud services already out there,
surely they would be hard for us to measure up to.

Of course, you could argue, that's not the point.
It's all about the journey, popping the hood and learning
how things are put together.

But on the other hand, Capsul is something that we want
to use, not just a pet project.

               Can I depend on it?

            /⁀⁀\                  __/‾⁀|,_
         (‾‾____‾‾)        |      xx . .|
            /  \           |       [   &gt;)
           /    \          |       ‶` ‾
                           |

      I WANT TO BELIEVE    |    (X)  DOUBT

Whether excited or doubtful, the tone of the question
expresses the real utility and risk associated with DIY.

We have to make our own seat belts for this, an
experience and practice that I personally feel is highly
under-rated.

I don't want to give up and just leave it to the experts.

I want to build the confidence necessary to make my own
systems, and to measure their stability and efficacy.

                        (\_/)
                        [. .]
                       ==&lt;.&gt;==

                 “ Anyone can Cook ”

It probably helps that I've never seen a friend get hurt
because of a flaw in something I designed, but even if
I had, I'd like to think that I'd continue believing
in the idea that technology is never “beyond” us.
I could never make it through Technoholics Anonymous,
because I'd never be able to believe a Higher Power will
restore sanity to the machine and save us from ourselves.

           ABOUT THE DEVELOPMENT PROCESS

The first step was to choose a language and framework.
We made this decision (Python 3, Flask) almost entirely
based on which language was the most commonly known in
our group. I was the only one who had never used Python
before, and I felt up to the task of learning a language
as a part of this process.

Next, we had to decide how the system would work.

How would we secure users' accounts?  How would users
pay for capsuls?  Would it be like a subscription,
would you buy compute credits, or receive a bill at
the end of the month?

In the interest of simplicity, we opted to use a
tumblr-style magic-link login instead of requiring
the user to provide a password. So, you have to
receive an email and click a link in that email
every time you log in.
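
For the curious, that flow can be sketched roughly like
this, using only the Python standard library. The token
lifetime, storage, and names here are illustrative, and
the real capsul-flask code differs (for one thing, it
keeps tokens in postgres, not in memory):

```python
# Illustrative magic-link login sketch; not the actual
# capsul-flask implementation.
import secrets
import time

LOGIN_TOKENS = {}          # token -> (email, expiry); the real app uses postgres
TOKEN_LIFETIME = 15 * 60   # links expire after 15 minutes (made-up number)

def send_magic_link(email):
    token = secrets.token_urlsafe(32)
    LOGIN_TOKENS[token] = (email, time.time() + TOKEN_LIFETIME)
    # in the real app this link is emailed to the user
    return f"https://capsul.org/login/{token}"

def redeem_magic_link(token):
    entry = LOGIN_TOKENS.pop(token, None)   # single-use: pop, don't get
    if entry is None:
        return None
    email, expiry = entry
    if time.time() > expiry:
        return None
    return email   # caller stores this in the Flask session
```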

We also decided to go with the “purchase credits, then
create capsul” payment workflow, because it was the
easiest way we could accept both credit card and
cryptocurrency payments, and we believed that requiring
the user to pay first was an appropriate level of
friction for our service, at least right now.

I had never worked on a project that integrated
with a payment processor or had a “dollars” column in a
database table before. I felt like I worked at the
Federal Reserve, typing

INSERT INTO payments (account, dollars) VALUES
    ('forest', 20.00);

into my database during development.

The application has three backends:

    a postgres database where all of the payment and
    account data is stored


    the virtualization backend which lifecycles the
    virtual machines and provides information about them
    (whether or not they exist, and current IP address)


    Prometheus metrics database which allows the
    web application to display real-time metrics for each
    capsul.



All of the payments are handled by external payment
processors Stripe and BTCPay Server, so the application
doesn't have to deal with credit cards or cryptocurrency
directly. What's even better, because BTCPay Server
tracks the status of invoices automatically, we can
accept unconfirmed transactions as valid payments and
then rewind the payment if we learn that it was a
double-spend attack. No need to bother the user about
Replace By Fee or anything like that.
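
The bookkeeping behind that rewind can be sketched like
this. The statuses and names below are illustrative, not
the actual BTCPay Server webhook schema:

```python
# Sketch of the "accept now, rewind later" credit
# bookkeeping; statuses and names are made up.

BALANCES = {}  # account -> dollars credited

def on_invoice_event(account, dollars, status):
    """Credit unconfirmed payments immediately; claw the
    credit back if the invoice is later invalidated
    (e.g. a double-spend)."""
    if status == "paid":          # may still be unconfirmed
        BALANCES[account] = BALANCES.get(account, 0) + dollars
    elif status == "invalid":     # double-spend detected
        BALANCES[account] = BALANCES.get(account, 0) - dollars
```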

The initial development phase took one week. Some days
I worked on it for 12+ hours, I think. I was having a
blast. I believe that the application should be secure
against common types of attacks. I kept the OWASP
Top 10 Web Application Security Risks in mind while I was
working on this project, and addressed each one.

    Injection
    We use 100% parameterized queries, and we apply strict
    validation to all arguments of all shell scripts.


    Broken Authentication
    We have used Flask's session implementation,
    we did not roll our own sessions.


    Sensitive Data Exposure
    We do not handle particularly sensitive data such as
    cryptocurrency wallets or credit card information.


    XML External Entities (XXE)
    We do not parse XML.


    Broken Access Control
    We have added the user's email address to all database
    queries that we can. This email address comes from the
    session, so hopefully you can only ever get information
    about YOUR account, and only if you are logged in.


    Security Misconfiguration
    We made sure that the application does not display error
    messages to the user, we are not running Flask in
    development mode, we are not running Flask as the root
    user, the server it runs on is well secured and up to
    date, etc.


    Cross-Site Scripting (XSS)
    We apply strict validation to user inputs that will be
    represented on the page, whether they are path variables,
    query parameters, form fields, etc.


    Insecure Deserialization
    We use the most up-to-date json parsing from the
    Python standard library.


    Using Components with Known Vulnerabilities
    We did check the CVE lists for any known issues with the
    versions of Flask and psycopg2 (database connector),
    requests, and various other packages that we are using,
    although automating this process would be much better
    going forward.


    Insufficient Logging &amp; Monitoring
    We may have some room for improvement here, however,
    verbose logging goes slightly against the “we don't
    collect any more data about you than we need to” mantra.


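To illustrate the first two items above (injection and
access control), here's a rough sketch of a validated,
parameterized, email-scoped query. Table and column
names are invented, and the psycopg2 execute call is
left as a comment since it needs a live database:

```python
# Illustrative sketch only; not the actual capsul-flask
# code. Table/column names are made up.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def get_capsuls_for(email):
    # strict validation of anything that came from the user
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email")
    # the email is passed as a bound parameter, never
    # string-formatted into the SQL text:
    query = "SELECT id FROM capsuls WHERE email = %s"
    params = (email,)
    # cursor.execute(query, params)  # with psycopg2
    return query, params
```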

If you would like to take a peek at the code, it's
hosted on our git server:

https://git.cyberia.club/cyberia/capsul-flask

© Attribution-ShareAlike 4.0 International
    Cyberia Computer Club 2020-∞
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[simple trusted compute: announcing capsul]]></title><description><![CDATA[spent countless nights testing different configurations.
We strived to make the service very simple, and very
maintainable. We're very proud of what we're announcing
today. We think it's a very unique service.]]></description><link>https://blog.breaksoftware.xyz/blog/simple-trusted-compute-announcing-capsul/</link><guid isPermaLink="false">690690d53296b100014c27c1</guid><dc:creator><![CDATA[j3s]]></dc:creator><pubDate>Wed, 11 Mar 2020 08:30:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Note: this post is ancient and out of date!</p>
<pre><code>

+———————————————————————————+ 
|                           |
|     ANNOUNCING CAPSUL     |
|                           |
+———————————————————————————+

https://capsul.org

Over the last year we've moved at light speed. Cyberia
Computer Club is now an entity. A formal nonprofit
organization with a democratic structure.

We organized and bought a server. We crowdfunded, and
spent countless nights testing different configurations.
We strived to make the service very simple, and very
maintainable. We're very proud of what we're announcing
today. We think it's a very unique service.

Capsul is a service that provides people with compute in
the form of virtual machines. All machines run on very
fast solid state storage, and have direct T3 network
access on a shared link. We do not collect user data
(besides your email address), and discard as many logs as
we feasibly can. Every VM is automatically backed up.
A more official privacy policy and TOS are coming soon.

To get you excited, here's a list of initially supported
operating systems:

          operating system  supported
          ————————————————  —————————
          alpine            yes
          ubuntu18          yes
          debian10          yes
          centos7           yes
          centos8           yes
          OpenBSD 6.6       planned
          GuixSD 1.0.1      planned
          Windows           no, never
          AIX               whyyyy

Our prices start at ~$5.99 a month:

        type    yearly cost  cpus  memory  ssd
        ————    ———————————  ————  ——————  ———
        f1-s    $70          1     512M    10G
        f1-m    $120         1     1024M   25G
        f1-l    $240         1     2048M   55G
        f1-x    $480         2     4096M   80G
        f1-xx   $960         4     8192M   160G
        f1-xxx  $1920        8     16G     320G

Capsul is very easy to use – no signup or registration is
necessary. Simply send an email to capsul@c3f.net with
your requirements, and you'll have VMs that you can ssh
into within a day or so.

Capsul machines are currently paid for on a yearly basis,
and we'll make every effort to remind you of payment
before your year expires.

    What sets Capsul apart?



Simply: our organization and our morality.

Cyberia Computer Club values privacy, simplicity,
transparency, accessibility, and inclusion.  We have no
shareholders, investors, or lenders, therefore every
change we make is directly beneficial to you. We actually
care about your experience, and it will only get better
with time – never worse.

We have a lot more coming for Capsul. The next planned
features include:
– private networking
– openbsd support
– monthly payments
– instant provisioning and decoms
– ipv6 support (with a reduced price instance type)
– a storage service (for those who want pictures)

That's all for now! Send us an email and get started with
Capsul today! :)

love,

j3s

additional resources:

Check out the Capsul website: https://capsul.org
Check out our bylaws here: https://cyberia.club/bylaws
Donate to the cause: https://cyberia.club/donate
All of our source code: https://git.cyberia.club
Chat with us on Matrix: #cyberia:cyberia.club
Chat with us on IRC: #cyberia on freenode

© Attribution-ShareAlike 4.0 International
    Cyberia Computer Club 2020-∞
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>