A New Mindcraft Moment?

Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to relevant topics of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself for your recent pieces on the topic. you don't like the findings? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs." let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy, use and ditch in that period.
3. "Issues, whether they're security-related or not, get patched quickly," some do, some don't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream; imagine the shitstorm if bugreports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets to be determined to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "Specifically, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money quote:
> I propose you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and the big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.
Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I would just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you have (most likely unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need a cheering section of the sort the tech press (LWN generally excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why must you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)

The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but simply throwing money at the problem won't fix this. And yes, I do understand that the commercial Linux distros do a lot (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's a lot more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I believe you definitely agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you happy?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i am the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of somebody else, be my guest and name them; i'm quite sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...]it also contained quite a few groan-worthy statements.
nothing is perfect, but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you happy?
no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (which Linus rightfully despises, FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Unfortunately, the surrounding cultural attitude of developers is to hit functional goals, and often performance goals. Security goals are often neglected. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a process that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, those patches will go upstream eventually anyway because the ideas that they embody are now timely.

I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this sort of problem, here's how everything will keep working because $proof, and note carefully that you're staring down the barrel of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. It's at a time like this, when something you appear to be an "expert" at is in demand, that you demonstrate cooperation and willingness to participate, because it's an opportunity.
I'm rather surprised that someone wouldn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in a typical career, and a handful at the most. Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as described in the article as a "mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, the developers who exploit the opportunity will prosper from it. I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Perhaps there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's perfectly reasonable to prefer working out of tree, providing the ability to develop impressive and significant security advances unconstrained by upstream requirements. That's work someone might also want to fund, if it meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh does not) and then fail to support it with any cite. It would be much more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick precis is that they told you your project was unhealthy because the code was never going upstream. You told them it was because of kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal would be the best thing to do. At that point you went silent, not vice versa as you suggest above.
> clearly i won't spend time to write up a begging proposal just to be told that 'no sorry, we don't fund multi-year projects at all'. that's something that one should be told in advance (or heck, be part of some public rules so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you will spend it, they're unlikely to disburse. Saying "I'm good and I know the problem, now hand over the money" doesn't even work for most academics who have a strong reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.

You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that; it's a fairly traditional first-stage business model, but it does rather depend on patches not being upstream in the first place, calling into question the earnestness of your attempt to place them there. Now here's some free advice from my field, which is helping companies align their businesses with open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to remember that it will be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everybody else who read this, but for you, it is: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B, and you may even have a Plan A selling a rollup of upstream-tracking patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII prepared to fund projects at that level? If not
> we all would end up with a lot of unfinished and partially broken features.
please show me the answer to that question.
without a definitive 'yes' there is no point in submitting a proposal, because that is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I must say.
"Lies, damned lies, and statistics". you know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches in directly (and that one commit you found that went in despite said ban is actually a very bad example, because it's also the one that Linus censored for no good reason and that made me decide to never send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.
> [...]calling into question the earnestness of your attempt to place them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a way i've got my answers; there's nothing more to the story. as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code solving complex problems are few and far between, as you'll find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation.
PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons; sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered.
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove; as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel; not everybody's dying to get their code in there, especially when it means putting up with the sort of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that notorious lkml style here. nice job there, James.). as for world domination, there are many ways to achieve it, and something tells me that you're clearly out of your league here, since PaX has already achieved that. you are running code that implements PaX features as we speak.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?
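[Editorial aside: the two measures being argued over here can both be expressed as grep pipelines. The sketch below is an assumption about what each side means, not a command either of them posted: jejb's one-liner counts commits *authored* under a name, while PaXTeam's objection is about commits that merely *credit* a name in Reported-by:/Suggested-by: trailers. It builds a throwaway repo so it is self-contained; run the same two pipelines inside a clone of torvalds/linux to get the contested numbers.]

```shell
# Two ways of counting a contributor's footprint in a git tree.
# "pax.*team" is just the string being searched for.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name="PaX Team" -c user.email="pax@example.com" \
    commit -q --allow-empty -m "feature: authored directly"
git -c user.name="Someone Else" -c user.email="else@example.com" \
    commit -q --allow-empty -m "fix: overflow" \
    -m "Reported-by: PaX Team <pax@example.com>"

# jejb's measure: commits *authored* under the name.
authored=$(git log --no-merges | grep -ic 'Author: pax.*team')

# the counter-measure: commits that only *credit* the name in a trailer.
credited=$(git log --no-merges | grep -Eic '(Reported|Suggested)-by:.*pax.*team')

echo "authored=$authored credited=$credited"   # authored=1 credited=1 here
```

In this demo repo the two measures agree; the thread's dispute is precisely that in the real kernel tree they do not.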
Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please show one that's not wrong, or less wrong. It should take less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't either

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to . PaXTeam is not averse to outright lying if it means he gets to look right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he is lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he is willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban
someone's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you are irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that, despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true"
Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the following paragraph says it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Just like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world: if you're criticizing the mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the fifth in a long-running series following a pretty predictable time trajectory. No, we didn't miss the overall analogy you were trying to make; we just don't think you can have your cake and eat it too.
-Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It is gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.
Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Unfortunately, I understand neither the "security" people (PaXTeam/spender) nor the mainstream kernel people when it comes to their attitude. I confess I have absolutely no technical capability on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, some of this stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is prepared to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones, which may be difficult or slow to patch. You'd probably want to kill many of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So, it's not either/or. It's probably "it depends". But if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad. This Dijkstra quote comes to mind immediately:

Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that reality was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum, and really proofs are the only way forwards. I'm no security expert; my field is all distributed systems. I understand and have implemented Paxos, and I believe I can explain how and why it works to anyone. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I just couldn't intuitively reason about this stuff at all. So I started defining the properties I wanted and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both completely obvious that this would happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum, and really proofs are the only way forwards.
Or are you just using the wrong maths?
Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head". But it's easy - by education I am a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what's an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff. Point is, you have to *layer* stuff, and look at things, and say "how can I split things off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-) Going back THIRTY years, I remember a story about a guy who built little computer crabs that would quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers, Wol Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link] To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we may construct schemas for the various editing ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued). The result is a set of operations that, executed in arbitrary order, yield a set of properties holding for the result and outputs. Thus proving the formal design correct (with caveats regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations). Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link] Looking through the history of computing (and probably lots of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about a doctor wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, Wol Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link] https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think this talk is very relevant to why writing secure software is so hard.. -Dave. Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link] While we are spending millions on a large number of security problems, kernel issues are not on our high-priority list.
Actually I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability. But "patch management" is a real concern for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. Today almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Frequently these systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates. Overall I am optimistic; networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use did lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.
Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link] The following is all guesswork; I'd be keen to know if others have evidence either way on this: The people who learn how to hack into these systems through kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to release and embarrass people, it _seems_ as if those hacks are via much simpler vectors. I.e. lesser-skilled hackers find there's a whole load of low-hanging fruit they can get at. They're not being paid up front for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I believe the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't bother your bottom line - at least not in a way your shareholders will be aware of. So why fund security? Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link] On the other hand, some effective mitigation at kernel level would be very helpful to crush cybercriminals'/skiddies' attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier.
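The class of weakness citypw alludes to can be sketched in a few lines (purely illustrative numbers, not real library layouts or real offset2lib details): when symbols sit at known constant offsets from a randomized base, a single leaked address lets the attacker compute every other address, so randomizing the base alone buys little.

```python
# Illustrative sketch: why known constant intra-library offsets defeat
# base-only address randomization. All numbers here are made up.
KNOWN_OFFSETS = {
    "printf": 0x51000,  # hypothetical offset of printf from the library base
    "system": 0x42000,  # hypothetical offset of system from the library base
}

def library_base(leaked_addr, leaked_symbol):
    """Recover the randomized base from one leaked symbol address."""
    return leaked_addr - KNOWN_OFFSETS[leaked_symbol]

def resolve(leaked_addr, leaked_symbol, wanted_symbol):
    """Compute any other symbol's address from that single leak."""
    return library_base(leaked_addr, leaked_symbol) + KNOWN_OFFSETS[wanted_symbol]

# Whatever base the loader picked, one leak of printf locates system.
base = 0x7f12_3400_0000                   # chosen "randomly" at load time
leak = base + KNOWN_OFFSETS["printf"]     # what an info-leak bug hands over
assert resolve(leak, "printf", "system") == base + KNOWN_OFFSETS["system"]
```

This is only the general shape of the problem; the actual offset2lib paper concerns correlated randomization of adjacent mappings, and as the next comment notes, PaX/Grsecurity's ASLR is designed so this particular trick does not apply.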
Will you explain the failosophy "a bug is a bug" to your customer and tell them it will be okay? Btw, offset2lib is useless against PaX/Grsecurity's ASLR implementation. For most commercial uses, more security mitigation in the software won't cost you more budget. You'll still have to do the regression test for every upgrade. Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link] Keep in mind that I focus on external web-based penetration tests and that in-house tests (local LAN) will likely yield different results. Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link] I keep reading this headline as "A new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess. Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link] Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link] Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link] Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link] Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link] (Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.) Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link] I'd just like to add that in my opinion, there is a general problem with the economics of computer security, which is especially visible currently. Two problems, even, perhaps. First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly selected just in order to "do something" and get better press.
It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field. Those spilling our scarce money/resources on ready-made ineffective tools should get the bad press they deserve. And we certainly need to enlighten the press on that, because it is not really easy to appreciate the efficiency of protection mechanisms (which, by definition, should prevent things from happening). Second, and this may be newer and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms. This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness. Nevertheless, all the resources go to those grown-up teenagers playing white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies who have yet to prove their usefulness entirely (especially for peace protection...). Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions.
Only those working on protection should. And yep, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaXteam would be among the first to benefit from such a change.) While thinking about it, I would not even leave the white-hat or cyber-guys any hype in the end. That is more publicity than they deserve. I crave the day I will read in the newspaper that: "Another of these ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nevertheless to bring one of those unfinished and dangerous-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the security experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to fund more security engineer positions in academia or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair." Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant. Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link] The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money that go into 'cyber security', but it is usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody who has valuable expertise and some company that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'. > Those spilling our scarce money/resources on ready-made ineffective tools should get the bad press they deserve. There is no such thing as 'our scarce money/resources'. You have your money, I have mine. Money being spent by some company like Red Hat is their money. Money being spent by governments is the government's money. (You, actually, have far more control over how Walmart spends its money than over what your government does with theirs.) > This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness. Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily concerned with self-preservation. Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts. Sadly you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Companies like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable... however, they are driven by the need to turn a profit, which means they have to cater directly to the kind of requirements established by their customer base. Customers for EL tend to be much more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats... assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I am sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux. On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is far more devastating and poses a massively higher risk than an obscure Linux kernel buffer overflow problem. It's just not really necessary for attackers to get 'root' to get access to the important data... usually all of which is contained in a single user account. Ultimately it is up to individuals like you and me to put the effort and money into improving Linux security. For both ourselves and other people. Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link] Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith.
And this is mostly your money or mine: either tax-fueled governmental resources or corporate costs that are directly passed on into the prices of the products/software we are told we are *obliged* to buy. (Look at the marketing discourse for corporate firewalls, home alarms or antivirus software.) I think it's time to point out that there are several "malicious malefactors" around, and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). In the end, I think you are right to say that currently it is only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, more or less at random, some hard-to-evaluate budgets. [1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied with malicious people, everybody should have factual, transparent and honest behavior as the first priority in their mind. Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link] It even has a nice, seven-line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005, and everything that was clearly a stupid idea 10 years ago has proliferated even more.
Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link] Note: IMHO, we should examine further why these dumb things proliferate and get so much support. If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message. If we are dealing with active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to showcase at a minimum (and more later on, of course). Your reference's conclusion is especially good to me. "Challenge [...] the conventional wisdom and the status quo": that job I would happily accept. Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link] That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value. Personally, I think there is no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block whole classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to common distros (e.g. there is no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the whole Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
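The bounds-checking layer paulj suggests could look something like the following minimal sketch (shown in Python for brevity, though his point is about C/C++; the `SafeBuffer` name and the record format are invented for illustration): all raw input passes through one small checked layer, so parsing code can never index past the end of the data, even when a hostile length field tells it to.

```python
# Sketch of a bounds-checking layer sitting between I/O and parsing.
import struct

class SafeBuffer:
    """Wraps raw input so every read is bounds-checked before parsing sees it."""
    def __init__(self, data: bytes):
        self._data = data
        self._pos = 0

    def take(self, n: int) -> bytes:
        # The single place where bounds are enforced.
        if n < 0 or self._pos + n > len(self._data):
            raise ValueError("read of %d bytes past end of buffer" % n)
        chunk = self._data[self._pos:self._pos + n]
        self._pos += n
        return chunk

    def u32(self) -> int:
        # Little-endian 32-bit length field, itself read via take().
        return struct.unpack("<I", self.take(4))[0]

def parse_record(buf: SafeBuffer) -> bytes:
    """Parse one length-prefixed record. A hostile length field cannot
    cause an over-read: take() re-checks it against the buffer size."""
    length = buf.u32()
    return buf.take(length)
```

The design choice is that the parser never touches the raw bytes directly; every access funnels through `take()`, so the whole class of out-of-bounds reads is blocked in one place rather than being re-checked (or forgotten) at each parse site.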
No doubt there are lots of people working on "block classes of attacks" stuff; the question is, why aren't more resources directed there? Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link] > There are a number of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies. This seems like a reason which is really worth exploring. Why is it so? I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It has been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it actually already gets enough. You may say that disaster has not struck yet, that the iceberg has not been hit. But it seems that the Linux development process is not overly reactive elsewhere. Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link] That is an interesting question; clearly that is what they actually believe, regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there just isn't enough consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly. Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link] The key issue with this domain is that it pertains to malicious faults.
So, when consequences manifest themselves, it is too late to act. And if the current commitment to a lack of voluntary strategy persists, we will oscillate between phases of relaxed unconsciousness and anxious paranoia. Admittedly, kernel developers seem quite resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. Not so distant are the days when innocent lives will unconsciously depend on the security of (Linux-based) computer systems; underwater, that is already the case if I remember my last dive correctly, as well as in several recent cars, according to some reports. Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link] Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is really not that surprising: for hosting needs the kernel has been "done" for quite a while now. Apart from support for current hardware there is not much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use Grsecurity.
I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is almost irrelevant on nodes of a supercomputer or on a system running large enterprise databases that are wrapped in layers of middleware. And mobile vendors simply don't care. Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link] Linking Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link] Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link] The assembled likely recall that in August 2011, kernel.org was root compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) during a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public postmortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a postmortem on the kernel.org meltdown -- in 2013. Two years ago.
He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about how the site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote. Who's responsible, then? Is anyone? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some facts? Rick Moen rick@linuxmafia.com Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link] Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions. Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link] I know the kernel.org admins have given talks about some of the new protections that have been put in place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Other stuff. Do a search for Konstantin Ryabitsev. Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link] I beg your pardon if I was somehow unclear: That was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, a few years prior, around 2002, and into many other shared Web hosts for many years).
But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: How did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) This is the sort of postmortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether or not the path to root privilege was a kernel bug (and, if not, what it was). Rick Moen rick@linuxmafia.com Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link] I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug.
31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole: per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents, including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits: - Site admins left the root-compromised Web servers running with all services still lit up, for several days. - Site admins and Linux Foundation sat on the information and failed to inform the public for those same several days. - Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?) - After promising a report for several years and then quietly removing that promise from the front page of kernel.org, Linux Foundation now stonewalls press queries. I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we would know what happened for certain.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen rick@linuxmafia.com Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link] Also, it's preferable to use live memory acquisition prior to powering off the system; otherwise you lose out on memory-resident artifacts that you can perform forensics on.
-Brad How about the long overdue post-mortem on the August 2011 kernel.org compromise? Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link] Thanks for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so then maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root. Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts you can perform forensics on. Arguable, but a tradeoff; you can poke the compromised live system for state data, but with the downside of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion. Rick Moen rick@linuxmafia.com Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link] Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link] With "something" you mean those who produce these closed source drivers, right? If the "client product companies" simply stuck to using parts with mainlined open source drivers, then updating their products would be much simpler. A new Mindcraft moment?
Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link] They've ring zero privilege, can access protected memory directly, and can't be audited. Trick a kernel into running a compromised module and it's game over. Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules are typically video drivers optimised for games ...
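The wide-open /dev/mem hole described earlier in the thread - the one the Phalanx rootkit reportedly leaned on - can at least be probed for. Below is a minimal, hypothetical sketch (not from any of the posts above) that merely checks whether the physical-memory device node can be opened at all; note that on modern kernels built with CONFIG_STRICT_DEVMEM the open itself may still succeed for root, with the restriction only biting on reads beyond the low 1 MiB.

```python
import os

def devmem_status(path="/dev/mem"):
    """Report whether the physical-memory device can be opened.

    On 2.6-era kernels built without CONFIG_STRICT_DEVMEM, a root
    process could read arbitrary RAM through this node, including the
    running kernel's image -- the class of hole that canned exploits
    such as Phalanx abused.  This sketch only tests openability.
    """
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError:
        # Missing device node, EPERM, or EACCES: access is restricted.
        return "restricted"
    os.close(fd)
    return "openable"

print(devmem_status())
```

An "openable" result here is not proof of the old hole: a STRICT_DEVMEM kernel allows the open but rejects reads past the first megabyte, which is precisely what killed this class of /dev/mem-injecting rootkit.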