• czardestructo@lemmy.world
    12 days ago

    I didn’t downvote you, but I’m honestly interested in an adult conversation regarding your stance. If all I use is MS365, and I can use it in a web app with full 2FA, how am I a security risk? I can access all the same things on my personal laptop, nothing is blocked, so how is Linux different?

    • Saik0@lemmy.saik0.com
      11 days ago

      The adult conversation would begin with this: you don’t get to change stuff you don’t own without permission from the owner. It’s not yours; it belongs to the company. Materially changing it in any way is a problem when you don’t have permission to do so.

      Most of this answer depends on what operations the company actually conducts. In my case, our platform has something on the order of millions of records of background checks, growing substantially every day. SSNs, court records, credit reports… a very long list of very, very identifiable information.

      Even just reinstalling Windows with default settings is an issue in our environment because of the stupid AI screen-capture thing Windows does now on consumer versions.

      I’m a huge proponent of Linux. Just talk to the IT people in your org… many of them will find you a way to get off the Windows boat. But it still has to be done in a way that meets all the security audits/policies/whatever the company must adhere to. Once again, I deal a lot with compliance. If someone is found to be willingly out of compliance, we MUST take it seriously. Even ignoring the obvious risk of data leakage, just to maintain compliance with insurance liability we have to take documented measures everywhere.

      Many default Linux installs don’t meet policy minimums. Here’s an example Debian box in a testing environment, with the default configuration from the installer, benchmarked against this standard: https://www.cisecurity.org/benchmark/debian_linux.
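      For a concrete sense of what gets benchmarked, an SCA item is typically just a pass/fail test against a config file or system setting. Here’s a minimal sketch in that spirit (the check names and the sample sshd_config below are illustrative stand-ins, not actual CIS benchmark items):

```python
import re

# Illustrative, CIS-style hardening checks. These names and patterns are
# hypothetical stand-ins; the real Debian benchmark has hundreds of items.
CHECKS = {
    "ssh_root_login_disabled": re.compile(r"^\s*PermitRootLogin\s+no\b", re.M),
    "ssh_empty_passwords_refused": re.compile(r"^\s*PermitEmptyPasswords\s+no\b", re.M),
    "ssh_x11_forwarding_disabled": re.compile(r"^\s*X11Forwarding\s+no\b", re.M),
}

def score_sshd_config(config_text: str) -> dict:
    """Return pass/fail per check, the way an SCA scores a single file."""
    return {name: bool(rx.search(config_text)) for name, rx in CHECKS.items()}

# Roughly what a stock install looks like: hardening options commented out,
# so every explicit-hardening check fails and the box benches poorly.
DEFAULTISH = """\
#PermitRootLogin prohibit-password
#PermitEmptyPasswords no
X11Forwarding yes
"""

results = score_sshd_config(DEFAULTISH)
print(results)  # every check comes back False on the default config
```

      Multiply that by hundreds of checks across SSH, auditd, mount options, kernel parameters, and so on, and you get the overall benchmark score.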

      Endpoint security would be missing from your laptop if you jumped off our infrastructure. Asset tracking would be completely gone (e.g., stolen assets: throwing away the cost of the hardware and risking whatever data happens to be on the device to malicious/public use). File integrity monitoring. XDR services.

      Did I say the device isn’t yours? If not, I’d like to reiterate that: it’s not yours. Obtaining root or admin permissions on our device means we can no longer attest that the device is monitored for the entire audit period. That creates serious problems.

      Edit: And who cares about downvotes? But I know it wasn’t you. It was a different lemmy.world user. Up/Downvotes are not private information.

      Edit 2: Typos and other fixes that bothered me.

      Edit 3: For shits and giggles, I spun up a CLI-only Windows Server 2022 box (we use these regularly, and yes, you can have Windows without the normal GUI) and wanted to see what it looks like without our hardening controls on it… The answer still ends up being that all installs need configuration to make them more secure than their defaults if your company is doing anything serious.

      • sugar_in_your_tea@sh.itjust.works
        12 days ago

        Exactly. And this is why I refuse to work at companies like yours.

        It’s nothing personal, but I don’t want to work somewhere where I have to clear everything with an IT department. I’m a software engineer, and my department head told IT we needed Macs not because we actually do, but because they don’t support Macs so we’d be able to use the stock OS. I understand that company equipment belongs to the company, not to me, but I also understand that I’m hired to do a job, and dealing with IT gets in the way of that.

        I totally appreciate the value that a standard image provides. I worked in IT for a couple of years while in school, and we used a standard image with security and whatnot configured (I even helped configure 802.1X auth later in my career), so I get it. But that’s my line in the sand: either the company trusts me to follow best practices, or I look elsewhere. I wouldn’t blatantly violate company policy by installing my own OS; I would just look for opportunities that didn’t have those policies.

        • Saik0@lemmy.saik0.com
          12 days ago

          Exactly. And this is why I refuse to work at companies like yours.

          Then good luck to you?

          But you seem to have missed the point. The images I share are an SCA (Security Configuration Assessment)… a “minimum configuration” standard, not a standard image. That SCA does live as standard images in our virtualized environments for certain OSes, though. I’m sure if we had more physical devices out in company-land we’d need to standardize more images to push out to them… but we don’t have enough assets out of our hands to warrant that kind of streamlining.

          I’m a huge proponent of Linux. Just talk to the IT people in your org… many of them will find you a way to get off the Windows boat. But it still has to be done in a way that meets all the security audits/policies/whatever the company must adhere to.

          I literally go out of my way to get answers for folks who want off the Windows boat. Go have a big-boy adult conversation with your IT team. I’m Linux-only at home (to the point where my kids have NEVER used Windows, especially these days, with schools being Chromium-only. And yes, they use Arch [insert meme]). I’ve converted a bunch of our infra to Linux that was historically Windows for this company. If anyone wanted Linux, I’d get them something they’re happy with that met our policies. You are outright limiting yourself to workplaces that don’t do any work in any auditable/certified field. That seems very, very short-sighted, and a quick way to limit your income in many cases.

          But you do you. My company’s dev team is perfectly happy. I would know, since I also do some dev work when time allows and work with them directly, regularly. Hell, most of them don’t even do work on their work-issued machines at all (to the point that we’ve stopped issuing a lot of them, at their request), since we have web-based VDI stuff where everything happens directly on our servers. It’s much easier to compile something on a machine with scalable processors basically at a whim (nothing like 128 server cores to blast through a compile), and all of those images meet our specs as far as policy goes.

          But if you’re looking to be that uppity, annoying user, then I’m also glad you don’t work at my company. Someone like you is how we’d lose our certification(s) during the next audit period, or worse… lose consumer data. You know what happens when those things happen? The company dies, and you and I both don’t have jobs anymore. Though I suspect that you, as the user who didn’t want to work with IT, would have a harder time getting hired again (especially in my industry) than I would for fighting to keep the company’s assets secure… but that one damn user (and their managers) just went rogue and refused to follow the policies and restrictions put in place…

          I’m a software engineer, and my department head told IT we needed Macs not because we actually do, but because they don’t support Macs so we’d be able to use the stock OS.

          No, you don’t. There is no tool that is Mac-only that you would need where there is no alternative. This “need” is a preference, more commonly referred to as a “want”… not a need. Especially on modern M* Macs. If you walked up to me and told me you need something, and you can’t actually quantify why or how that need supersedes current policy, I would also tell you no. An exception to policy needs to outweigh the cost of risk by a significant margin. A good IT team will give you answers that meet your needs and the company’s needs, but the company’s needs come first.

          either the company trusts me to follow best practices, or I look elsewhere

          So if I gave you a link to a remote VM and you set it up the way you want, then I came in after the fact and checked it against our SCA… would you score anywhere close to a reasonable mark? The fact that you’re so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following “best practices”. No single person can keep track of and secure systems these days. It’s just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour. The company pays me to secure their stuff. Not you. You wasting your time doing that task inefficiently and incorrectly is a waste of company resources as well. “Best practice” would be the security folks handling the security of the company, no?

          • sugar_in_your_tea@sh.itjust.works
            12 days ago

            I’m linux only at home (to the point where my kids have NEVER used windows

            Same.

            I honestly don’t think this issue has anything to do with our staff, but with our corporate policies. Users can’t even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).

            My issue has less to do with Windows (unacceptable for other reasons) than with the lack of admin access. Our IT team eventually decided to have us install some monitoring software, which we all did while preserving root access on our devices.

            I would honestly prefer our corporate laptops (ThinkPads) over Apple laptops, but we’re not allowed to install Linux on them and have root access because corporate wants control (my words, not theirs).

            web-based VDI stuff where everything happens directly on our servers

            I don’t know your setup, but I probably wouldn’t like that, because it feels like solving the wrong problem. If compile times are a significant issue, you probably need to optimize your architecture because your app is probably a monolithic monster.

            I like cloud build servers for deployment, but I hate debugging build and runtime issues remotely. There’s always something that remote system is missing that I need, and I don’t want to wait a day or two for it to get through the ticket system.

            lose consumer data

            Customer data shouldn’t be on dev machines. Devs shouldn’t even have access to customer data. You could compromise every dev machine in our office and you wouldn’t get any customer data.

            The only people with that access are our devOPs team, and they have checks in place to prevent issues. If I want something from prod to debug an issue, I ask devOPs, who gets the request cleared by someone else before complying.

            I totally get the reason for security procedure, and I have no issue with that. My issue is that I need to control my operating system. Maybe I need to Wireshark some packets, or create a bridge network connection, or do something else no sane IT professional would expect the average user to need to do, and I really don’t want to deal with submitting a ticket and waiting a couple days every time I need to do something.

            There is no tool that is Mac-only that you would need where there is no alternative

            Exactly, but that’s what we had to tell IT so we wouldn’t have to use the standard image, which is super locked down and a giant pain when doing anything outside the Microsoft ecosystem. I honestly hate macOS, but if I squint a bit, I can almost make it feel like my home Linux system. I would’ve fought with IT a bit more, but that’s not what my boss ended up doing.

            We run our backend on Linux, and our customers exclusively use Windows, so there’s zero reason for us to use macOS (well, except our iOS builds, but we have an outside team that does most of that). Linux would make a ton more sense (with Windows in a VM), but the company doesn’t allow installing “unofficial” operating systems, and I guess my boss didn’t want to deal with the limited selection of Linux laptops. I’m even willing to buy my own machine if that would be allowed (it’s not, and I respect that).

            If our IT was more flexible, we’d probably be running Windows (and I wouldn’t be working there), but we went with macOS. Maybe we could’ve gotten Linux if we had a rockstar heading the dept, but our IT infra is heavy on Windows, so we’re pretty much the only group doing something different (corporate loves our product though, and we’re obsoleting other in-house tools).

            The fact that you’re so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following “best practices”.

            No, I’ve just had really bad experiences with IT groups, to the point where I just nope out if something seems like a potential nightmare. If the infra is largely Microsoft, the standard-issue hardware runs Windows, and the software group I’m interviewing with doesn’t have special exceptions, I have to assume it’s the bog-standard “IT calls the shots” environment, and I’ll nope right on out. For me, it’s less about the pay and more about being able to actually do my job, and I’ll take a pay cut to not have to deal with a crappy IT dept.

            I’m sure there are good IT depts out there (and maybe that’s yours), but it’s nearly impossible to tell the good from the bad when interviewing a company. So I avoid anything that smells off.

            It’s just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour.

            Yet I’ve pointed out several security issues in our infra managed by a professional IT team, from zero-days that could impact us to woefully outdated infra. I’m not perfect, and I don’t believe anyone is, but just being in the IT position doesn’t automatically make you better at keeping up with security patches.

            I’m usually the first on our team to update (I’m a lead, so I want to catch incompatibilities before they halt development), and I work closely with our internal IT team to stay updated. In fact, just Friday I asked about some potential concerns, and it turned out we were running into resource limits on devices hosted on Linux OSes that were already past the security-update window. So: two issues caught by curiosity about something I saw in the code, as it relates to infra I can’t (and shouldn’t) access. I don’t blame our team (they’re always understaffed, IMO), but my point here is that security should be everyone’s concern, not just that of a team who locks down your device so you can’t screw things up.

            If everything is exactly the same, everything will be compromised at the same time, so some variation (within certain controls) is a good thing IMO. Yet top-down standardization makes that implausible.

            The company pays me to secure their stuff. Not you.

            The company also pays me to write secure, reliable software, and I can’t do that effectively if I can’t install the tools I need.

            Yes, IT professionals have their place, and IMO that’s on the infra side, not the end-user machine side. So set up the WiFi to block direct access between machines, segment the network using VLANs to keep resources limited to teams that need them, put a boundary between prod (Ops) and devs to contain problems, etc. But don’t take away my root access. I’m happy to enable a system report to be sent to IT so they can check package versions and open ports and whatnot, but let me configure my own machine.

            • Saik0@lemmy.saik0.com
              12 days ago

              I get your points. But we simply wouldn’t get along at all, even though I’d be able to provide every tool you could possibly want in a secure, policy-compliant way, and probably long before you actually ever needed it.

              but I hate debugging build and runtime issues remotely. There’s always something that remote system is missing that I need

              If the remote system is a dev system, it should never be missing anything. So if something’s missing, there’s already a disconnect. Also, if you’re debugging runtime issues, you’d want faster compile times anyway, so I’m not sure why your “monolith” comment is even relevant. If it takes you 10 compiles to fully figure out the problem, and you end up compiling 5 minutes quicker on the remote system because it’s not a mobile chip in a shit laptop (and it’s already set up to run dev anyway), then you’re saving time to actually do coding. But to you that’s an “inconvenience” because you need root for some reason.

              but my point here is that security should be everyone’s concern, not just that of a team who locks down your device so you can’t screw things up.

              No. At least not in the sense you present it. It’s not just about locking down your device so you can’t screw it up; it’s so that you’re never a single point of failure. You’re not advocating for “everyone looking out for the team”. You’re advocating that everyone should just cave and cater to your whim, rest of the team be damned, where your whim is a direct data-security risk. This is what the audit body will identify at audit time, and when it’s identified the company will likely face an ultimatum: fix the problem (lock the machine down to the policy standards, or remove your access outright, which would likely mean firing you since your job requires access) or the certification will not be renewed. And if insurance has to kick in and it’s found that you were “special”, they’ll very easily deny the whole claim, stating that the company was willfully negligent. You are not special enough. I’m not special enough, even as the C-suite officer in charge of IT.

              The policies keep you safe just as much as they keep the company safe. You follow them, and the company’s overall posture is better. You follow them, and if something goes wrong you can point at policy and say “I followed the rules”. Root access to a company machine because you think you might one day need to install something on it is a cop-out; tools that you use don’t change all that often, so that 2-day wait for the IT team to respond (your scenario) would only happen once in how many days of working for the company? And it only takes one sudo command to install something compromised and bring the device onto campus or onto the SDN (which you wouldn’t be able to access on your own install anyway, so you wouldn’t be able to do work regardless, or connect to dev machines at all).

              Edit to add:

              Users can’t even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).

              We’re the same! But… it’s Firefox. If you want to use alternate browsers while on our network, you use the VDI, which spins up a disposable container of any of a number of options, though none of them are persistent. In our case, catering to Chrome would mean potentially using non-standard, Chrome-specific functions, which we specifically don’t do. Most of us are pretty anti-Google overall in our company anyway.

              but it’s nearly impossible to tell the good from the bad when interviewing a company.

              This is fair enough.

              • sugar_in_your_tea@sh.itjust.works
                11 days ago

                you end up compiling 5 minutes quicker

                This implies the entire build still takes a few minutes on that beefier machine, which is in the “check back later” category of tasks. Rebuilds need to be seconds, and going from 10s to 5s (or even 30s) isn’t worth a separate machine.

                If my builds took that long, I’d seriously reconsider how the project is structured to dramatically reduce that. A fresh build taking forever is fine, you can do that at the end of the day or whatever, but edit/reload should be very fast.

                it’s so that you’re never a single point of failure

                That belongs at the system architecture level IMO. A dev machine shouldn’t be that interesting to an attacker since a dev only needs:

                • code and internal docs
                • test environments
                • “personal” stuff (paystubs, contracts, etc)
                • VPN config for remote access to test envs

                My access to all of the source material is behind a login, so IT can easily disable my access and entirely cut an attacker out (and we require refreshing fairly frequently). The biggest loss is IP theft, which only requires read permissions to my home directory, and most competitors won’t touch that type of IP anyway (and my internal docs are dev level, not strategic). Most of my cached info is stale since I tend to only work in a particular area at a given time (i.e. if I’m working on reports, I don’t need the latest simulation code). I also don’t have any access to production, and I’ve even told our devOPs team about things that I was able to access but shouldn’t. I don’t need or even want prod access.

                The main defense here is frequent updates, and I’m 100% fine with having an automated system package monitor, and if IT really wants it, I can configure sudo to send an email every time I use it. I tend to run updates weekly, though sometimes I’ll wait 2 weeks if I’m really involved in a project.
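                As a sketch of what that kind of monitor could report, here’s a hypothetical parser for syslog-style sudo entries (the sample log lines are made up; a real setup would more likely use the sudoers `mail_always`/`mailto` options or an auditd rule rather than hand-rolled parsing):

```python
import re

# Parse syslog-style sudo entries into a report a monitoring agent could
# mail to IT. The sample lines below are fabricated for illustration.
SUDO_LINE = re.compile(
    r"sudo:\s+(?P<user>\S+)\s*:.*?COMMAND=(?P<command>.+)$", re.M
)

def summarize_sudo_usage(log_text: str) -> list:
    """Return one {'user', 'command'} dict per sudo invocation found."""
    return [m.groupdict() for m in SUDO_LINE.finditer(log_text)]

SAMPLE_LOG = """\
Mar  3 09:14:02 dev-laptop sudo:  alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/usr/bin/apt upgrade
Mar  3 11:40:55 dev-laptop sudo:  alice : TTY=pts/1 ; PWD=/home/alice ; USER=root ; COMMAND=/usr/sbin/tcpdump -i eth0
"""

for event in summarize_sudo_usage(SAMPLE_LOG):
    print(event["user"], "ran", event["command"])
```

                That’s the whole idea: visibility for IT without taking the keys away.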

                if something goes wrong you can point at policy and say “I followed the rules”

                And this, right here, is my problem with a lot of C-suite-level IT policy: it’s often more about CYA and less about actual security. If there were another 9/11, the airlines would point to the TSA and say, “not my problem,” when the attack very likely came through their supply chain. “I was just following orders” isn’t a great defense when the actor should have known better. Or, on the IT side specifically: if my machine was compromised because IT was late rolling out an update, my machine was still compromised, so it doesn’t really matter whose shoulders the blame lands on.

                The focus should be less on preventing an attack (still important) and more on limiting the impact of an attack. My machine getting compromised means leaked source code, some dev docs, and having to roll back/recreate test environments. Prod keeps on going, and any commits an attacker makes in my name can be specifically audited. It would take maybe a day to assess the damage, and that’s it, and if I’m regularly sending system monitoring packets, an automated system should be able to detect unusual activity pretty quickly (and this has happened with our monitoring SW, and a quick, “yeah, that was me” message to IT was enough).

                My machine is quite unlikely to be compromised in the first place though. I run frequent updates, I have a high quality password, and I use a password manager (with an even better password, that locks itself after a couple hours) to access everything else. A casual drive-by attacker won’t get much beyond whatever is cached on my system, and compromising root wouldn’t get much more.

                For your average office worker who only needs office software and a browser, sure, lock that sucker down. But when you’re talking about a development team that may need to do system-level tweaks to debug/optimize, do regular training or something so they can be trusted to protect their system.

                tools that you use don’t change all that often

                Sure, but when I need them, I need them urgently. Maybe there’s a super high-priority bug on production that I need to track down, and waiting 2 days isn’t acceptable, because we need same-day turnaround. Yeah, I could escalate and get someone over pretty quickly, but things happen when critical people are on leave, and IT can review things afterward. That’s pretty rare, and if I have time, I definitely run changes like that through our IT pros (i.e. “hey, I want to install X to do Y, any concerns?”).

                Most of us are pretty anti-google overall in our company anyway.

                Then maybe we’d be a better fit than I thought. If, during the interview process, I discovered that IT didn’t use MS or Google for their cloud stuff, I may actually be okay with a locked-down machine, because the IT team is absolutely based. I’d probably ask a lot of follow-up questions, and maybe you’d mitigate my concerns.

                But when shopping around for a new job, I steer clear of any red flags, and “even devs use standard IT images” and “we’re a MS shop” completely kills my interest. My current company is an MS shop, but they said we have our own infra for our team, and we use Macs specifically to avoid the standard, locked-down IT images.

                On my personal machines, I use Firefox, openSUSE (due to openQA, YaST, etc; TW on desktop, Leap on NAS and VPS), and full-disk encryption. I’m considering moving to MicroOS as well, for even better security and ease of maintenance. I expose internal services through a WireGuard tunnel, and each of those services runs in a docker container (planning to switch to podman). I follow cybersecurity news, and I’m usually fully patched at home before we’re patched at work. Cyber security is absolutely something I’m passionate about, and I raise concerns a few times/year, which our OPs team almost always acts on.

                All of that said, I absolutely don’t expect the keys to the kingdom, and I actually encourage our OPs team to restrict my access to resources I don’t technically need. However, I do expect admin access on my work machine, because I do sometimes need to get stuff done quickly.

                • Saik0@lemmy.saik0.com
                  11 days ago

                  And this, right here, is my problem with a lot of C-suite-level IT policy: it’s often more about CYA and less about actual security.

                  Remediation after an attack is part of the security posture. How the company recovers and continues to operate is a vital part of security-incident planning. The CYA aspect comes from the legal side of that planning. You can follow every best practice ever, but if something still happens, what does the company do without an insurance fallback or other protections? Even a minor data breach can cause all sorts of legal trouble to crop up, even ignoring a litigious user base. Having the policies satisfied keeps those protections in place. It keeps the company operating, even when an honest mistake causes a significant problem. Unfortunately, it’s a necessary evil.

                  A casual drive-by attacker won’t get much beyond whatever is cached on my system, and compromising root wouldn’t get much more.

                  On a company computer? That’s presumably on a company network? Able to talk and communicate with all the company infrastructure? You seem to be narrowing the scope to just your machine, when a compromised machine talks to way more than just the stuff on the local machine. With a root jump-host on a network, I can get a lot more than just what’s cached on your system.

                  I discovered that IT didn’t use MS or Google for their cloud stuff,

                  We don’t use Google at all if it’s at all possible to get away with it. We do have disposable Docker images that can be spun up in the VDI interface to do things like test the web side of the program in a Chrome browser (and Brave, Chromium, Edge, Vivaldi, etc.). We do use MS for email (and, by extension, other Office-suite stuff because it’s in the license; Teams, as much as I fucking hate what they do to the GUI/app every other fucking month, is useful for communicating with other companies, since we often have to get on calls with their API teams), but that’s it. Nextcloud/LibreOffice is the actual company storage for “cloud”-like functions, and there’s backup local mail-host infrastructure lying in wait for the day MS inevitably fucks up their product more than I’m willing to deal with as far as O365 mail goes.

                  I’m considering moving to MicroOS as well, for even better security and ease of maintenance.

                  I’m pushing for a rewrite out of an archaic ’80s language (probably why compile times suck for us in general) into Rust, running on Alpine, both to get rid of the need for Windows Server altogether in our infrastructure and for the low-maintenance value of a tiny Linux distro. I’m not particularly on the SUSE boat, just because it’s never come up. I float more on the Arch side of Linux personally, and Debian for production stuff typically. Most of our standalone products/infrastructure are already on Debian/Alpine containers. Every year I’ve been here I’ve pushed hard to get rid of more and more, and it’s been huge for the company’s overall stability and security.

                  “even devs use standard IT images”

                  No, it’s “even devs meet the SCA”, not necessarily a standard image. I pointed that out, but only in passing. I can spawn an SCA for many different Linux OSes that enforces/proves a minimum security posture for the company overall. Honestly, I wouldn’t care what you did with the system beyond not having root and meeting the SCA. Most of our policy is effectively that, in nicer terms for the auditing people. The root restriction is simply so that you can’t disable the tools that prove the audit, and by extension so that I know, as the guy ultimately in charge of the security posture, that we’ve done everything reasonable to keep security above the industry standard.

                  The SCA checks for configuration hardening in most cases. That same Debian example I posted above, here’s a snippet of the checks

                  • sugar_in_your_tea@sh.itjust.works
                    11 days ago

                    Able to talk and communicate with all the company infrastructure?

                    No, we have hard limits on what people can access. I can’t access prod infra, full stop. I can’t even do a prod deployment w/o OPs spinning up the deploy environment (our Sr. Support Eng. can do it as well if OPs aren’t available).

                    We have three (main) VPNs:

                    • corporate net - IT administrated internal stuff; don’t need for email and whatnot, but I do need it for our corporate wiki
                    • dev net - test infra, source code, etc
                    • OPs net - prod infra - few people have access (I don’t)

                    I can’t be on two at the same time, and each requires MFA. The IT-supported machines auto-connect to the corporate VPN, whereas, as a dev, I need the corporate VPN maybe once a year, if that, so I’m almost never connected. Joe over in accounting can’t see our test infra, and I can’t see theirs. If I were in charge of IT, I would have more segmentation like this across the org, so that a compromise at accounting can’t compromise R&D, for example.

                    None of this has anything to do with root on my machine though. Worst case scenario, I guess I infect everyone that happens to be on the VPN at the time and has a similar, unpatched vulnerability, which means a few days of everyone reinstalling stuff. That’s annoying, but we’re talking a week or so of productivity loss, and that’s about it. Having IT handle updates may reduce the chances of a successful attack, but it won’t do much to contain a successful attack.

                     If one machine is compromised, you have to assume every device that machine can talk to is also compromised, so the best course of action is to reduce interaction between devices. Instead of IT spending their time validating and rolling out updates, I’d rather they spend it reducing the potential impact of a single point of failure. Our VPN currently doesn’t isolate clients from each other (I can reach ports my coworkers open if I know their internal IP), and I’d rather they fix that than care about whether I have root access. There’s almost no reason I’d ever need to connect directly to a peer’s machine, so that should be a special, time-limited request. On the other hand, I may need to grab a switch and bridge my machine’s network if I need to test some IOT crap on a separate net (and I need root for that).
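
                     For what it’s worth, the bridging bit is only a couple of `ip` commands. A minimal sketch (interface names like `eth0` are placeholders; on real hardware this needs root, so the demo below runs inside a throwaway unprivileged user+network namespace instead of touching the host):

                     ```shell
                     # Create a bridge and bring it up inside a scratch namespace, then show it.
                     # On a real bench setup you'd run the same ip commands as root on the host.
                     unshare --user --map-root-user --net sh -c '
                       ip link add br0 type bridge   # create the bridge device
                       ip link set br0 up
                       ip link show br0              # confirm it exists
                     '
                     # On the host you would then enslave the physical NICs (names are examples):
                     #   ip link set eth0 master br0
                     #   ip link set enx00e04c680001 master br0
                     ```

                     Which is exactly the kind of thing a locked-down, no-root image makes impossible without a ticket.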

                    nextcloud/libreoffice is the actual company storage for “cloud”-like functions…

                    Nice, we use Google Drive (dev test data) and whatever MS calls their drive (Teams recordings, most shared docs, etc). The first is managed by our internal IT group and is mostly used w/ external teams (we have two groups), and the second is managed by our corporate IT group. I hate both, but it works I guess. We use Slack for internal team communication, and Teams for corporate stuff.

                    an archaic 80’s language (probably why compile times suck for us in general) into Rust

                    That’s not going to help the compile times. :)

                    I don’t use Rust at work (wish I did), but I do use it for personal projects (I’m building a P2P Lemmy alternative), and I’ve been able to keep build times reasonable. We’ll see what happens when SLOC increases, but I’m keeping an eye on projects like Cranelift.

                    I float more on the arch side of linux personally

                     That’s fair. I used Arch for a few years, but got tired of manually intervening when updates went sideways, especially Nvidia driver updates. openSUSE Tumbleweed’s openQA seemed to cut that down a bit, which is why I switched, and snapper made rollbacks painless when the odd Nvidia update borked stuff. I’m on AMD GPUs now, so update breakage has been pretty much non-existent. With some orchestration, Arch can be a solid server distro; I just personally want my desktop and servers to run the same family, and openSUSE was the only option with a rolling desktop and stable servers.

                     For servers, I used to use Debian, and all our infra runs either Debian or Ubuntu. If I were in charge, I’d probably migrate the Ubuntu boxes to MicroOS, since we only need a container host anyway. I’m comfortable with apt, pacman, and zypper, and I’ve done my share of dpkg shenanigans as well (we did unattended Debian upgrades for an IOT project).
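
                     (For the curious: on Debian, unattended upgrades mostly come down to installing the `unattended-upgrades` package and enabling two apt periodic settings; the file path below is the stock one.)

                     ```
                     # /etc/apt/apt.conf.d/20auto-upgrades
                     APT::Periodic::Update-Package-Lists "1";
                     APT::Periodic::Unattended-Upgrade "1";
                     ```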

                    “even devs meet SCA”.

                    SCA is for payment services, no? I’m in the US, and this seems to be an EU thing I’m not very familiar with, but regardless, we don’t touch ecommerce at all, we’re B2B and all payments go through invoices.

                    The root restriction is simply so that you can’t disable the tools that prove the audit

                     If you’re worried someone will disable your tools, why would you hire them in the first place? And wouldn’t it be painfully obvious anyway, since you’d stop getting reporting updates?

                     We do auditing, and our devOPs team gets a weekly report from IT about any devices that aren’t updated yet or aren’t reporting. They also do a manual check every quarter or so to verify serials, version numbers, and whatnot. I’ve gotten one notice from our local devOPs person, and very few of my team show up on the report either. The ones that do tend to be on our UX and Product teams, and honestly, they have more access to interesting info than we devs do (i.e. they have the planned features for the next 6 months; we just have the next month or so). And they need far fewer exceptions to the rules, since UX mostly just needs their design software and Product just needs office stuff and a browser.

                    I obviously can’t speak for all devs, but in general, devs tend to be more interested in applying updates in a timely manner and keeping things secure. In fact, I think all of my devs already used a password manager and MFA before starting, which absolutely isn’t the case for other positions.