That’s awesome, but no, they made something far more useful, lol. I’m glad to see projects like that though; it’s a lost art!
Years and years ago I built my own 16-bit computer from the NAND gates up: ALU, etc., all built from scratch. I wrote the assembler, then wrote a compiler for a lightweight object-oriented language, and built the OS, network stack, etc. At the end of the day I had a really neat, absolutely useless computer. The knowledge was what I wanted, not a usable computer.
Building something actually useful and modern takes so much more work. I could never even make a dent in the hour, max, that I have each day outside of work and family. Plus, I worked in technology for 25 years and ended as director of engineering before fully leaving tech behind and taking a leadership position.
I’ve done so much tech work. I’m ready to spend my down time in nature, watching birds, and skiing.
The article says that Steam showing a notice on snap installs that it isn’t an official package and that errors should be reported to Snap would be extreme. But that seems pretty reasonable to me, especially since the snap package doesn’t include that in its own description. Is there any reason that would be considered extreme, given the higher-than-normal error rates with the package and the lack of an appropriate package description?
I highly recommend Philip Tetlock’s book Superforecasting; he is the sponsor of the project you mention.
One method of forecasting that he identified as effective was using a spreadsheet to record events that might occur over the next 6-18 months, along with an initial probability based on good judgement and the factors you quoted. Then, every day, look for new information and adjust the forecast up or down by some (usually small) percentage. Repeat, and the goal is that you will trend towards a reasonable probability. I omitted many details, but that was the gist.
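To make that concrete, here is a toy sketch of my own (not anything from the book; the events and numbers are made up):

```python
# Toy sketch of the spreadsheet approach: start with an initial probability
# for each event, then nudge it a little whenever new information arrives.
forecasts = {
    "Event A resolves by mid-2025": 0.60,  # hypothetical event and starting estimate
    "Event B resolves by mid-2025": 0.25,
}

def update(forecasts, event, nudge):
    """Apply a small up/down adjustment, clamped to [0, 1]."""
    forecasts[event] = min(1.0, max(0.0, forecasts[event] + nudge))

# Each day: read the news, decide which forecasts the new information affects,
# and adjust by a small amount.
update(forecasts, "Event A resolves by mid-2025", +0.03)
update(forecasts, "Event B resolves by mid-2025", -0.02)
print(forecasts)
```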
Now, that’s for forecasting on a short-ish timeframe. There is a place for more open-ended reasoning and imagination, but you have to be careful not to fall prey to your own biases.
This particular forecast of OP’s feels like it ignores several long-running trends in technology adoption and user behavior without naming events that would reverse them, and it forecasts that something they care about will do better in the long term, which is a source of bias to watch. I tend to agree with you that elements of this forecast are flawed.
I use a terminal whenever I’m doing work that I want to automate, when it’s the only way to do something (such as parameters that are CLI-only), or when using a GUI would require additional software I don’t otherwise want.
I play games and generally do rec time in a GUI, but I do all my git and docker work from the CLI.
It sounds like they are moving forward with clinical testing in partnership with a bio company, so I’m sure they withheld the information anticipating a patent. The result of this paper was the validation of the explainable AI model, which identified candidate classes of compounds.
This was years ago, before GPU processing really took off. We wanted the performance, but we also wanted to see if we could develop an affordable discrete lab device that could be placed in labs to aid computationally directed bench work. So, effectively, we were testing the efficacy of the models and designing ASICs to perform lab tests.
It sounds like they trained a classification model using 39,000 molecules with known reactivity to MRSA. The molecules are vectorized text representations of the structures. Once trained, they can run arbitrary molecules through the model and see which ones are predicted to have antibiotic properties, or at least MRSA reactivity.
They likely fed in molecules from families of structures that seem likely to contain an antibiotic but are too numerous to manually test them all. They get a prediction of which ones are likely to have the properties they want, and then start the slow process of creating and testing the molecules in the lab.
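If it helps, a toy sketch of that kind of pipeline might look like this (my own illustration with placeholder molecules and labels, not the paper’s actual method or code; real work would use far richer molecular features):

```python
# Sketch: vectorize SMILES-style structure text, train a classifier on molecules
# with known MRSA reactivity, then score new candidates for follow-up in the lab.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: structure strings plus a 0/1 "active against MRSA" label.
train_smiles = ["CC(=O)OC1=CC=CC=C1C(=O)O", "C1=CC=C(C=C1)O"]  # placeholder molecules
train_labels = [1, 0]

# Character n-grams are one simple way to turn structure text into features.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 4))
X_train = vectorizer.fit_transform(train_smiles)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, train_labels)

# Score arbitrary candidate molecules; high scores go on to bench testing.
candidates = ["CC1=CC=CC=C1"]  # placeholder candidate
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
print(list(zip(candidates, scores)))
```

The real value is in the last step: the model narrows tens of thousands of candidates down to a shortlist worth synthesizing and testing.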
It doesn’t sound like it but they don’t have enough detail in the article to say.
It sounds likely they are using a classification model that takes a vectorized text representation of molecules and classifies or scores them by their expected properties/reactivity. They took 39,000 molecules with known reactivity to MRSA to train the model, I assume to classify the structures. Once trained, they can feed arbitrary molecules into the model and see which ones are predicted to have antibiotic properties, which they can then verify with bench work.
They likely fed in molecules from classes of likely candidate structures, and the model helped focus and direct the wet-lab work.
I’m not up on the latest, but years ago I helped a similar project using FPGAs running statistical models to direct lab work.
That’s a good takeaway. AWS is the ultimate Swiss Army knife, but it is easy to misconfigure. Personally, when you are first learning AWS, I wouldn’t put in more data than you are willing to pay for on the most expensive tier. AWS also gives you options to set price alerts, so if you do start playing with it, spend the time to set cost alerts so you know when something is going awry.
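For example, one way to do that is a CloudWatch alarm on the estimated-charges billing metric. A minimal sketch, assuming you have billing metrics enabled, boto3 credentials configured, and an SNS topic to notify (the topic ARN below is a placeholder):

```python
# Minimal sketch: alarm when the estimated monthly bill exceeds $10.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # billing metrics live in us-east-1

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-10-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["BILLING_ALERT_TOPIC_ARN"],  # placeholder: your SNS topic ARN
)
```

AWS Budgets can do similar alerting from the console if you’d rather not script it.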
Have a great day!
So you just asked about the most confusing thing in AWS service naming, because of how the names changed over time.
Before S3 had an archival tier, there was a separate service that AWS named Amazon Glacier, which it later renamed to Amazon S3 Glacier.
Around 2012, AWS started adding tiers to S3, which made the standalone service largely redundant. I recommend you look at S3 proper unless you have something like a Synology that can directly integrate with the older, job-based API used by the original Glacier service.
So, let’s say I have a 1 TB archival file, a single tarball, and I upload it to a brand-new S3 bucket with no versioning, special features, etc., except a lifecycle policy to move objects from S3 Standard to S3 Glacier Instant Retrieval after 0 days. So, effectively, I upload the file and it moves to Glacier-class storage.
S3 Standard is ~$24/TB/month, and let’s say, worst-case scenario, our data sits on Standard for one whole day before moving. That first day costs roughly:
$0.77 (one day of Standard) + $0.005 (API cost of the PUT)
Then there is the lifecycle charge to move the data from Standard to Glacier, with one request per object each way. Since we only have one object, the cost is:
$0.004 out of Standard
$0.02 into Glacier
The Glacier Instant Retrieval tier costs ~$4.10/TB/month. Since we would be on it for all but one day, the cost on the first bill would be:
$3.95
From the second month onward you would pay just the ~$4.10/month, unless you are constantly adding or removing data.
Let’s say six months later you download your 1 TB archive file. That would incur a cost of up to $30.
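Putting the same rough numbers together in one place (my approximations from above, just totaled):

```python
# Rough first-month total for the 1 TB example above, using the same approximate figures.
standard_day   = 24 / 31          # ~one day of S3 Standard at ~$24/TB/month
put_request    = 0.005            # PUT for the single upload
transition_out = 0.004            # lifecycle request out of Standard
transition_in  = 0.02             # lifecycle request into Glacier
glacier_month  = 4.10 * (30 / 31) # ~all but one day at ~$4.10/TB/month

first_month = standard_day + put_request + transition_out + transition_in + glacier_month
print(f"First month: ~${first_month:.2f}")  # roughly $4.77
print("Steady state: ~$4.10/month; a later 1 TB retrieval adds up to ~$30")
```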
Now, I know that seems complicated and expensive. It is, because it is aimed at people like me in my former role as director of engineering, with complex needs and budgets to pay for them. It doesn’t make sense as a large-scale backup of personal data, unless you also want to leverage other AWS services, or you are truly just dumping the data away and will likely never need to retrieve it.
S3 is great for complying with HIPAA, feeding data into a CDN, and generally moving data around in a performant way. I’ve literally dropped a petabyte of data into S3 and it just took it and did its thing.
In my personal AWS account I use S3 as a place to dump cache contents built by Lambda functions and served up by API Gateway. Doing stuff like that is super cheap. I also use private git repos (CodeCommit), a private container registry (ECR), and a container host (ECS), and it is nice to have all of that stuff just click together.
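For a rough idea of what that pattern looks like, here is a minimal sketch with placeholder bucket and key names (not my actual setup):

```python
# Minimal sketch of a Lambda handler that rebuilds a cached payload and drops it in S3,
# where API Gateway (or anything else) can serve it cheaply.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    payload = {"request_id": context.aws_request_id, "items": [1, 2, 3]}  # stand-in for real cache content
    s3.put_object(
        Bucket="my-cache-bucket",        # placeholder bucket name
        Key="cache/latest.json",         # placeholder key
        Body=json.dumps(payload).encode(),
        ContentType="application/json",
    )
    return {"statusCode": 200}
```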
For backing up my personal computer, I use iDrive Personal and OneDrive, where I don’t have to worry about the cost per object, etc. iDrive (not an Apple service) lets you back up multiple devices to their platform and keeps them versioned.
Anyway, happy to help answer questions. Have a great day.
Thanks for posting. I just deployed to my container host in AWS ECS and it’s working well in my testing. Very easy deployment with docker.
It’s complicated. I gave the most expensive pricing, which is their fastest tier, includes striping across three availability zones, and guarantees 11 nines of data durability. Additionally, the easy integration with all other AWS services and the feature richness of S3 buckets make it hard to do a fair apples-to-apples comparison unless you have really well-defined needs. So I gave the highest price to keep it simple, and for someone who says they just have a few GB, any cost should be trivial.
AWS S3 has a free tier that covers the first 5 GB. I recommend it because the AWS CLI is excellent and gives you lots of options for how to sync your data. The pricing is $0.023/GB/month after the free tier. It can be overwhelming to get into AWS, but it is worth it to have access to the ultimate IT-service Swiss Army knife.
I wasn’t saying you did it for clicks. The site published an article that is a rehash of an 11-year-old article. They are the ones scraping the barrel.
No idea but it definitely feels like scraping the bottom of the barrel for clicks.
Am I wrong or is this article simply re-reporting a Eurogamer article from 2012? Because the only source this article cited is a 2012 article from Eurogamer.
I run a lot of tech: containerized workloads in AWS, home firewalls running on Protectli boxes for my family around the country, and wireless controllers to run their APs. But as I got older, one thing I stopped rolling my own instance of was data backups. My data backs up to OneDrive and iDrive, so there are two copies of it. My wife has access to both via shared credentials in a 1Password folder that she knows how to access and uses regularly.
As I got older and had a family, the pictures of our kids, wills, financial records, and insurance documents all became just too important. Every service that holds my data is paid annually, for less than $200/year total, and auto-renews. She could call either company and prove ownership if she ever did need help getting access. Also, I can easily share folders with her.
It’s funny how getting older makes you think about the sorts of issues enterprise teams have: don’t implement solutions where you will be one deep, have a succession plan, and remember that complexity is the enemy. All the tech I run now is fun and helpful, but it can be replaced with a trip to Best Buy. The data and pictures, however, must be easy for her to retrieve.
So I don’t have a good self-hosted solution for you, other than to say that at some point it’s OK to change your strategy. And if you are worried about privacy, you can encrypt subsets of your data locally before it is backed up.
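For example, here is a minimal sketch of local encryption before backup using the Python cryptography library’s Fernet (file names are placeholders; keep the key in your password manager, not next to the backup):

```python
# Minimal sketch: symmetrically encrypt a file before it lands in a cloud-synced folder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this in your password manager
fernet = Fernet(key)

with open("tax-records.tar", "rb") as f:        # placeholder input file
    encrypted = fernet.encrypt(f.read())

with open("tax-records.tar.enc", "wb") as f:    # this is what gets backed up
    f.write(encrypted)

# To restore later: fernet.decrypt(open("tax-records.tar.enc", "rb").read())
```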
When writing basic business code, structuring the code well and having good naming standards means you shouldn’t need a ton of comments, but you should still have some. Plus, using structured function comment blocks (doc comments) gives you IntelliSense in some languages and IDEs, which is important for code reuse in teams.
However, when I was doing scientific programming, I’d at times have comments on almost every line, noting the mathematical formula and operations the line represents. Implementing a convolutional neural network with parameters to dynamically scale the layers, or MPI stochastic simulations, is much different from writing CRUD functions or basic business logic.
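To illustrate the style I mean, here is a made-up fragment (not code from any real project):

```python
# Made-up example of the comment-per-line style I used in scientific code:
# a single dense-layer forward pass, with the math spelled out next to each step.
import numpy as np

def dense_forward(x, W, b):
    z = W @ x + b                  # z = Wx + b       (affine transform)
    a = np.maximum(0.0, z)         # a = max(0, z)    (ReLU activation)
    return a

# Compare with typical business logic, where good names carry the meaning
# and line-by-line comments like the above would just be noise.
```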
With coffee, all things (and heart palpitations) are possible. It took me about a year and a half between work and studies. Definitely not a day. 😀