code logs -> 2023 -> Sat, 22 Jul 2023 < code.20230721.log - code.20230723.log >
--- Log opened Sat Jul 22 00:00:13 2023
00:14 Emmy [Emmy@Nightstar-qo29c7.fixed.kpn.net] has quit [Ping timeout: 121 seconds]
00:30 Pink [Pink@Nightstar-vtorv0.sub-75-236-184.myvzw.com] has joined #code
00:54 vladko [vladkozoch@Nightstar-cg2.srs.98.185.IP] has joined #code
00:54 < vladko> hi
01:29 <&[R]> Hey
01:30 <&[R]> Only have 6TB of stuff to pull off of my failing BTRFS array D:
01:31 <&ToxicFrog> eek
01:32 <&[R]> Got 9TB out already
01:33 <&[R]> The disks seem to be mostly functional, they just start crapping out if they've been on too long
01:33 <&[R]> (SMART reports all clear on them)
01:34 <&[R]> On the flip side, I'm pretty happy with moosefs so far
01:35 <&ToxicFrog> I've never even heard of moosefs
01:35 <&[R]> It's pretty close to something I've been looking for forever
01:35 <&[R]> Basically a distributed JBOD array
01:36 <&ToxicFrog> Aah, I see
01:36 <&[R]> Each disk is meant to be formatted individually (they suggest XFS) and each host providing storage runs a single instance of the "chunk server", and you have one instance of the master server on the network
01:37 <&[R]> Default is 3 duplicates for each chunk
01:37 <&[R]> Chunks are 64kB
01:37 <&[R]> Unfortunately, it's absolutely shit if you're storing a ton of small files (eg: a source tree)
01:38 <&[R]> I have directories that take 30GB on ext4, but take 80GB on moose
01:38 <&[R]> But it also keeps track of any disks that seem to be failing, so you can identify the specific disk easily
01:39 <&[R]> And it auto-rebalances itself, it seems
01:42 <&[R]> <&[R]> Default is 3 duplicates for each chunk <-- typo, 2 duplicates is the default
01:44 * ToxicFrog nods
01:45 <&ToxicFrog> I don't have enough servers for the "distributed" part to really be useful, so my setup is just sshfs/nfs/smb backed by ZFS
01:45 <&[R]> https://termbin.com/jwv9
01:45 <&[R]> Fair enough
01:45 <&[R]> I'll be adding a bunch of disks to it soon
01:46 <&[R]> (Note: I'm not actually reading from it ATM, I am writing to it. The reads seem to be the rebalancing I mentioned)
01:46 <&[R]> Also you can set replication per directory
01:47 <&[R]> So you can have a low-replication scratch directory and a high-replication critical directory. It also retains deleted files for 24h (by default; you can configure this)
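The per-directory replication goals and trash retention [R] describes map to MooseFS's client-side tools; a sketch (the mount point /mnt/mfs and directory names are assumptions, not from the log):

```shell
# Require 3 replicas for everything under critical/, recursively.
mfssetgoal -r 3 /mnt/mfs/critical
# Scratch data can get by with a single copy.
mfssetgoal -r 1 /mnt/mfs/scratch
# Verify the goal actually applied.
mfsgetgoal /mnt/mfs/critical
# Trash retention is set in seconds; 86400 = the default 24h mentioned above.
mfssettrashtime -r 86400 /mnt/mfs/critical
```

These commands only work on a mounted MooseFS filesystem; goals apply to newly written chunks immediately and existing chunks are re-replicated in the background.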
01:49 * ToxicFrog nods
01:49 <&ToxicFrog> ZFS lets you set replication per dataset, which is more coarse than per directory
01:49 <&ToxicFrog> But of course it's not replication across systems, just across disks
01:49 <&[R]> ZFS requires thought and preparation
01:50 <&[R]> Likewise, so does ceph (which is in the same space as moosefs)
02:10 <&ToxicFrog> And moose does not?
02:11 <&[R]> It's JBOD
02:11 <&[R]> You just add disks and/or chunkservers
02:12 <&[R]> ZFS requires that you make pools, the pools have to have considerations on how big they are, etc...
02:15 <&ToxicFrog> I mean, ZFS also lets you do that, the default pool mode is "just add a bunch of disks idk"
02:15 <&[R]> That runs counter to everything I was told about it D:
02:20 <&ToxicFrog> So, a zfs storage pool is made up of a bunch of devices. Data is striped (and optionally replicated) across them in some manner that the zfs driver considers convenient. Devices making up the pool don't need to be the same size, and you can add and remove devices post hoc. Each pool has a bunch of settings, some of which need to be set when the pool is created and cannot subsequently be adjusted, some of which can be changed but take effect only on newly written data or newly added devices, and some of which can be changed freely.
02:20 <&ToxicFrog> Within the pool you can create any number of datasets. These are basically filesystems with their own options (including compression, replication, etc) and mountpoints, but they all share the same underlying storage.
02:21 <&ToxicFrog> The individual devices making up the pool can be disks (or partitions, or iSCSI targets, or whatever, basically any block device), but they can also be ZFS mirrors or RAID-Z devices made up of multiple block devices.
02:22 <&ToxicFrog> Those do need forward planning; mirrors are restricted to the size of the smallest disk, RAID-Z devices likewise, and (unlike mirrors) you can't add or remove drives after creating them either.
02:22 <&ToxicFrog> But you also don't actually need to use those, you can just add a bunch of individual disks as top-level devices.
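The pool/dataset setup ToxicFrog describes can be sketched with the standard zpool/zfs commands (pool name, dataset names, and device paths here are placeholders, not from the log):

```shell
# Create a pool from two plain disks as top-level devices (no mirror/RAID-Z).
zpool create tank /dev/sdb /dev/sdc
# Grow the pool later by just adding another disk.
zpool add tank /dev/sdd
# Datasets share the pool's storage but carry their own options.
zfs create -o compression=lz4 tank/media
# 'copies=2' is the per-dataset replication mentioned above: extra on-disk
# copies against corruption, not protection against a whole disk vanishing.
zfs create -o copies=2 tank/important
```

Note that `zpool create` and `zpool add` are destructive to the devices involved, so this is illustration only.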
02:23 <&[R]> That's good to know
02:25 <&ToxicFrog> (although, notably, if you have a pool made up of a bunch of top-level devices and one of them fails hard, you lose the entire pool, not just data stored on that specific disk; replication is there in case of on-disk corruption and, unlike mirrors or RAID-Z, will not protect you from a drive just straight up vanishing.)
02:25 <&ToxicFrog> (based on knowledge of other cluster filesystems, I suspect that with mooseFS, if one disk goes away the rest of the filesystem remains accessible, and if you had sufficient replicas you have lost no data?)
02:26 <&[R]> Correct
02:29 <&[R]> Though, it does rely on underlying filesystems
02:30 <&[R]> All chunk servers have a metadata file which is strongly recommended to be on RAID; the master also has a metadata file, and its on-disk format relies on an underlying filesystem
02:32 <&[R]> The clients are also expected to connect directly to the chunk servers (not just the master)
02:33 <&[R]> But it also has some partitioning schemes for when you have multiple data centers and the links between those data centers have costs
02:33 Emmy [Emmy@Nightstar-qo29c7.fixed.kpn.net] has joined #code
02:33 <&[R]> I haven't set that up myself though
02:58 vladko [vladkozoch@Nightstar-cg2.srs.98.185.IP] has quit [Ping timeout: 121 seconds]
03:40 Degi [Degi@Nightstar-9o4.1bk.55.78.IP] has quit [Ping timeout: 121 seconds]
03:41 Degi [Degi@Nightstar-5gr.flg.183.77.IP] has joined #code
03:47 < Alek> ....... can you pronounce BTRFS as "butterface"???
03:59 <@macdjord> Alek: Well, you /can/, but only because no one has yet invented a way to defenestrate people over the internet.
04:55 < Alek> :P
12:33 < Emmy> DoIPAAS?
12:34 < Emmy> Say, i have a question
12:34 < Emmy> Lately, my PC has a habit of uncommanded wakeups out of hibernation.
12:34 < Emmy> Any possible causes, or fixes? (aside from not hibernating LOL)
14:25 <@gnolam> I don't know, but I'd stay clear of it - they're pretty cranky after they wake up out of hibernation. Also, never get between a PC and its cubs.
14:47 <&[R]> You mean hibernation, and not sleep, correct?
14:47 <&[R]> Do you have a mouse/keyboard plugged in while you're hibernating? (I'm assuming a laptop)
14:57 Vornicus [Vorn@ServerAdministrator.Nightstar.Net] has joined #code
14:57 mode/#code [+qo Vornicus Vornicus] by ChanServ
16:37 < abudhabi> Why is Bing such garbage?
16:40 Vornicus [Vorn@ServerAdministrator.Nightstar.Net] has quit [Connection closed]
16:42 <&[R]> Because MS made it?
17:13 <&[R]> TBF though, Google's pretty shit right now too
17:14 <&[R]> (I meant their search, but their general attitude towards things is also shitty)
17:40 <&[R]> I feel like I've spent too much time this last week just watching file transfers go D:
17:40 <&[R]> Gigabit feels so slow now
18:10 < Emmy> [r], yes, hibernation, not sleep, and no, not a laptop, a midsize tower
18:12 < Emmy> https://nl.pcpartpicker.com/list/TNJnXy
18:13 < Emmy> ^ Specifically.
18:13 < Emmy> Gigabit too slow? why not gigapigeon?
18:18 <&[R]> wut
18:19 <&[R]> Emmy: does moving the mouse at all wake it up from hibernate?
18:19 <&[R]> What about hitting a key on the keyboard?
18:19 <&[R]> What other peripherals do you have plugged in?
18:19 <&[R]> Is WoL configured?
18:19 <&[R]> Is there a wake-up time configured?
18:20 < Emmy> wake-up is sometimes instant, never at the same moment, so i don't think it's a wakeup timer.
18:21 < Emmy> mouse and keeb are the only (USB) peripherals, the rest is DVI-D and analog sound
18:22 <&[R]> Instant wake-up suggests it's not actually hibernating
18:22 <&[R]> Have you killed power to it while it's hibernating? Then tried to resume? Did that actually work?
18:22 < Emmy> yes
18:22 < Emmy> not instant as in very quick wakeup, instant as in it immediately starts up again
18:23 <&[R]> Okay
18:23 <&[R]> Can the mouse and/or keyboard wake it up?
18:25 < Emmy> hmmmh. I'd have to test that.
18:25 < Emmy> BRB
18:26 Emmy [Emmy@Nightstar-qo29c7.fixed.kpn.net] has quit [Connection closed]
18:26 Emmy [Emmy@Nightstar-qo29c7.fixed.kpn.net] has joined #code
18:26 < Emmy> Hmmmh, mouse movement didn't, but a click woke it up. :/
18:27 <&[R]> ATM I'd suspect an accidental mouse or keyboard press
18:28 < Emmy> Well, then i'll have to check for ghosts, since it also happens while i'm asleep or at work
18:29 <&[R]> Clean desk?
18:29 < Emmy> Well, around the mouse and keyboard at least
18:30 < Emmy> no living, furry mice at least. XD
18:30 <&[R]> No pets either?
18:30 < Emmy> Nope
18:31 < Emmy> i mean, that would be prime cat behaviour, but i have no cats
18:31 < Emmy> Well, i set USB selective suspend to disabled. maybe that'll help
18:31 < Emmy> wakeup timers already were disabled.
18:31 <&[R]> WoL is the last thing I'd suspect; might it have been set up to be used at some point?
18:39 < Emmy> Maybe. I just found that in device manager > network adapters > my ethernet card > Power Management > 'Allow this device to wake the computer' was on.
18:39 < Emmy> might be it.
18:39 < Emmy> I'll have to see if there's any change
18:40 < Emmy> could've sworn i had that off.
18:40 <&[R]> WoL requires that something sends a special packet
18:40 <&[R]> Windows might do something stupider though
18:40 <&[R]> Like wake up to get updates
18:41 < Emmy> I believe i killed that on this PC. At least i never have unexpected restarts.
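The wake-source guessing above can be shortened on Windows with powercfg, which reports what actually woke the machine and which devices are armed to wake it (run from an elevated prompt; output varies by system):

```shell
:: Show what last woke the machine (device, wake timer, etc).
powercfg /lastwake
:: List every device currently allowed to wake the system,
:: e.g. the ethernet adapter whose Power Management box Emmy found checked.
powercfg /devicequery wake_armed
:: List any active wake timers (e.g. scheduled maintenance or updates).
powercfg /waketimers
```

`powercfg /lastwake` right after an uncommanded wakeup usually points straight at the culprit.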
22:34 < Alek> Maybe dust your peripherals out, vacuum the keyboard? Could there be something inside causing a signal short?
22:39 Kindamoody is now known as Kindamoody[zZz]
23:15 Vornicus [Vorn@ServerAdministrator.Nightstar.Net] has joined #code
23:15 mode/#code [+qo Vornicus Vornicus] by ChanServ
--- Log closed Sun Jul 23 00:00:15 2023