Doesn’t appear to show any charts on Chrome for mobile…
Seems to be a responsiveness issue, because it goes away in landscape mode, and the charts show.
They work great when you have many teams working alongside each other within the same product.
It helps immensely with having consistent quality, structure, shared code, review practices, CI/CD…etc
The downside is that you essentially need an entire platform engineering team just to set up and maintain the monorepo: the tooling, custom scripts, custom workflows, etc. that support all the additional needs a monorepo and its users have. Something that would never be a problem in a single repository, like the list of pull requests, may need custom processes and workflows in a monorepo due to the sheer volume of changes.
(Of course, small monorepos don’t require a full team doing maintenance and platform engineering. But often you’ll still find yourself dedicating an entire FTE’s worth of time to it.)
It’s similar to microservices in that a monorepo is a solution to an organizational scaling problem, not a technological one. It will create new problems that you have to solve that you would not have had to solve before, and that solution requires additional work to be effective and ergonomic. If those ergonomic and consistency issues aren’t being solved, it will just devolve into a mess over time.
Yeah, but that’s not what we’re talking about here.
RTF has many more features than markdown can reasonably support, even with personal custom syntaxes that no one else knows :/
I use markdown for everything, as much as possible, but in the context of creating an RTF WYSIWYG editor with non-trivial layout & styling needs it’s a no go.
There are markup languages for this purpose. And you store the rich text as normal text in that markup language. For the most part.
It’s typically an XML or XML-like language, or bb-codes. MS Word for example uses XML to store the markup data for the rich text.
Simpler and more limited text needs tend to use markdown these days, like Lemmy, or most text fields on GitHub.
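To make the contrast concrete, here’s a hedged sketch of the same rich text ("Hello" in bold, then plain " world") stored three ways. The tag and attribute names are illustrative, not any real editor’s format; Word’s actual OOXML is far more verbose.

```typescript
// Markdown: compact, but limited to the features the syntax defines.
const asMarkdown = "**Hello** world";

// XML-like markup (roughly the shape WordprocessingML or HTML takes):
// verbose, but arbitrary attributes (fonts, sizes, colors) fit naturally.
const asXml = '<p><run bold="true">Hello</run><run> world</run></p>';

// BBCode: a middle ground used by older forums.
const asBbcode = "[b]Hello[/b] world";

// A toy converter for the one overlapping feature (bold). It also shows
// why markdown can't round-trip richer styling: there's nowhere in the
// syntax to hang a font size or color.
function boldMarkdownToBbcode(md: string): string {
  return md.replace(/\*\*(.+?)\*\*/g, "[b]$1[/b]");
}

console.log(boldMarkdownToBbcode(asMarkdown)); // "[b]Hello[/b] world"
```

The markup string is the persistence format; the WYSIWYG rendering sits on top of it.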
There’s no need to bring complex technology stacks into it!
Now the real hard part is the rendering engine for WYSIWYG. That’s a nightmare.
The ecosystem is really it. C# as a language isn’t the best; objectively, TypeScript is a more developer-friendly and globally type-safe (at design time) language. It’s far more versatile than C# in that regard, to the point where there’s almost no comparison.
But holy hell, the .Net ecosystem is light-years ahead: it’s incredibly consistent across major versions, extremely high quality, has well-considered design advancements, and is absolutely bloody fast. Tie that in with first-party frameworks that cover most major needs, and it all works together smoothly, at least for web dev.
Holy shit that’s completely wrong.
It’s for sure AI generated articles. Time to block softonic.
This is a weird take given that the majority of projects relevant to this article are massive projects with hundreds or thousands of developers working on them, over time periods that can measure in decades.
Pretending those don’t exist and imagining fantasy scenarios where all large projects are made up of small modular pieces (while conveniently making no mention to all of the new problems this raises in practice).
Replacing functions, replacing files, and rewriting modules is expected and healthy for any project. This article is referring to the tendency of programmers to believe that an entire project should be scrapped and rewritten from scratch, which seems to have nothing to do with your comment…?
This thread is a great example of why, despite sharing knowledge, we continually fail to write software effectively.
The person you’re arguing with just doesn’t get it. They have their own reality.
I have a weird knack for reverse engineering, and reverse engineering stuff I’ve written 7-10 years ago is even easier!
I tend to be able to find w/e snippet I’m looking for fast enough that I can’t be assed to do it right yet 😆
That’s one of the selling points, yep
To be fair, Microsoft has been working on Garnet for something like 4+ years and has already adopted it internally to reduce infrastructure costs.
Which has been their MO for the last few years: improve .Net baseline performance, build high-performance tools on top of it, dogfood them, and then release them under open source licenses.
Great timing that Microsoft just released a drop-in replacement that’s an order of magnitude faster: https://github.com/microsoft/garnet
Written in C# too, so it’s incredibly easy to extend and write performant functions for.
It needs to be a bit easier to deploy though, but they only just opened the repo, so I’ll wait.
The designers as seen by designers is so right.
Nothing they come up with can be wrong, it’s all innovative!!
.Net 8 will work on Linux just fine. But winforms will not, it’s specifically a legacy windows-only UI framework.
You’re going to have to jump through some incredible hoops to get it to work on Linux. Which are definitely not part of your normal curriculum.
C# on non-Windows is not impossible, but it’s going to require effort infeasible for school projects like that one.
You mean WinForms (the Windows-specific UI framework) on non-Windows? Otherwise this is incredibly misleading, and plain wrong.
C# on non-Windows is the norm, the default even, these days. I build, compile, and run my C# applications on Linux, and have been for the last 5+ years.
I go full chaos and look up where I last used it when I need a snippet…
The follow-on: lots and LOTS of unrelated changes can be a symptom of an immature codebase/product, or simply a new endeavor.
If it’s a greenfield project, in order to move fast you don’t want to gold-plate or over-predict the future. This often means you constantly run into miscellaneous design blockers, which necessitate refactors & improvements along the way. Depending on the team, this can be broken out into the refactor, then the feature, and reviewed back-to-back. This does have its downsides though, as the scope of the design may become obfuscated and may lead to ineffective code review.
Of course, mature codebases don’t often suffer from the same issues; most of the foundational problems are solved and patterns have been well established.
/ramble
There is no context here though?
If this is a breaking change on a major upgrade path, like a major base UI library change, then it might not be possible to break it down into pieces without tripling or quadrupling the work (which likely took a few folks all month to achieve already).
I remember migrating from Vue 1 to Vue 2 at a previous job, along with upgrading to an entirely new UI library. It required partial code freezes, and we figured it had to be done in one big push. It was only three of us doing it while the rest of the team kept up with maintenance & feature work.
The PR was something like 38k loc of actual UI code, excluding package/lock files. It took the team an entire dedicated week and a half to review, piece by piece. We chewed through hundreds of comments during that time. It worked out really well, everyone was happy, and the timelines were even met early.
The same thing happened when migrating an ASP.NET .Net Framework 4.x codebase to .Net Core 3.1. We figured that bundling in major refactors during the process, to get the biggest bang for our buck, was the best move. It was something like 18k loc, and it also worked out similarly well in the end.
Things like this happen, not that infrequently depending on the org, and they work out just fine as long as you have a competent and well organized team who can maintain a course for more than a few weeks.
Just a few hundred?
That seems awfully short, no? We’re talking a couple hours of good flow state; that may not even be a full feature at that point 🤔
We have folks who can push out 600-1k loc covering multiple features/PRs in a day if they’re having a great day and are working somewhere they are proficient.
Never mind important refactors that might touch a thousand or a few thousand lines, which might be pushed out on a daily basis and need relatively fast turnarounds.
Essentially half of the job of writing code is also reviewing code, it really should be thought of that way.
(No, loc is not a unit of performance measurement, but it can correlate)
Too bad commenters are as bad at reading articles as LLMs are at handling complex scenarios. And are equally confident in their comments.
This is a pretty level-headed, calculated approach DARPA is taking (as expected from DARPA).