Why don't you use Cura slicer?
-
FYI, the front end (UI) in Cura is written in Python and the actual slicer that takes the models and generates the gcode is written in C++. Personally, I only work on the C++ part and the UI part is completely black magic to me. The slicer itself really isn't that slow for most models but it does depend on lots of factors. Some combinations of model features and settings will take longer to slice, that's for sure. I am partly guilty inasmuch as a lot of the stuff I have done for Cura has been to improve the quality of the gcode and that often (but not always) involves longer slicing times. I appreciate that gross slowdowns are not acceptable and a lot of my time is spent in finding quicker ways of achieving good results. In fact, I have spent most of today working on improving the speed of the infill line order optimization because it's too slow when there are many thousands of infill/skin lines to consider.
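To give a feel for why that gets slow: ordering thousands of separate infill/skin lines so the head doesn't hop all over the place is essentially a travelling-salesman-flavoured problem. Here's a deliberately naive greedy sketch (Python for brevity; this is not Cura's actual code, which is C++ and rather more refined) where every pick rescans all remaining lines, so the cost grows quadratically:

```python
import math

def order_lines(segments, start=(0.0, 0.0)):
    """Greedily order line segments so each starts near where the
    previous one ended. segments: list of ((x1, y1), (x2, y2))."""
    remaining = list(segments)
    ordered = []
    pos = start
    while remaining:
        # O(n) scan per pick -> O(n^2) overall: fine for hundreds of
        # lines, painful for many thousands.
        best_i, best_flip, best_d = 0, False, math.inf
        for i, (a, b) in enumerate(remaining):
            for flip, end in ((False, a), (True, b)):
                d = math.dist(pos, end)
                if d < best_d:
                    best_i, best_flip, best_d = i, flip, d
        a, b = remaining.pop(best_i)
        if best_flip:
            a, b = b, a
        ordered.append((a, b))
        pos = b
    return ordered
```

The usual cure is some form of spatial bucketing so that each pick only considers nearby candidates instead of every remaining line.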
-
I appreciate the thoroughness of that reply. This is so off-topic, but worth discussing.
I would prefer it if Canonical staff aggressively dogfooded their own product, and that means updating as a priority; to hear otherwise is worrying.
I had a couple of crashes of the AppImage with v3.0.3, nothing with 3.1 or 3.2 beta. None of them took out my base OS, nor would I expect them to; I lay the blame for that firmly on an LTS version of Ubuntu running 'a recent' kernel from a filesystem developer, and probably some other hacks that are not being mentioned. If I were doing tech support, your case would be firmly closed with a 'please come back to us once you can reproduce on vanilla Ubuntu' note.
I make a clear distinction between OS and applications here; for the OS, the current package management systems are working well and allow fine-grained continual upgrading, which is crucial for these components and for security. AppImages are not going to replace these, nor do they have any ambition to do so (https://appimage.github.io/apps/).
In an imaginary future, Cura 9.x gets released, at which point someone needs to package it for Ubuntu, someone (else?) packages it for RHEL, someone for AUR, someone for Portage, and so on. The Debian packager needs to consider Debian, Ubuntu (current and LTS), Raspbian, and what else? The RHEL packager needs to consider three supported RHEL versions, corresponding CentOS versions, Fedora, Scientific Linux, and others… Meanwhile something similar is happening for AUR, SuSE, Portage, etc.
- But hang on: Cura 9.x uses Python 6.66, which is in the latest headline releases of Ubuntu and Fedora but never going to be available for older releases, so those users are either SOL or you need to embed it in the package.
- This is 'the old way' and I sincerely hope we won't see it much longer.
One demonstrable upshot is that if you try 'apt install cura', it does not work. As of this writing only the engine is available, at v14, with no GUI. On Fedora, 'dnf install cura' does somewhat better: you get the GUI and engine at v15. Arch (hilariously) has at least 10 Cura packages, some building from Git, so you should be able to get 3.0+ on that, but you may have to work your way through the AUR packages until you find the one that is most sane. And remember: AppImages can be put inside other package systems; they are just an executable, and that's what package managers do, install executables. Yet even this is not being done, at least not by the community or the creators.
DevOps people need solutions for keeping apps rolling out to the untechnical masses; AppImages, Docker, Flatpak and others are those solutions. Get used to them; they are going to supersede apt and co. A simple user-facing AppImage manager will emerge, app stores will get integration, possibly it will move to a Docker-like model capable of pulling images from distributed repositories and managing them in a local store, or it will evolve in some entirely unforeseen way… agile.
You had a list of objections that appear to stem from objectioneering to new things. AppImages do not run as root, nor as low-UID services; nor can they elevate their own privileges. They can see what the invoking user sees and nothing else; they cannot write over system files or configs, and must store their own config in a user-writable location, usually the invoking user's home directory. If one ever required or asked to run as root, I'd delete it in an instant. The alternatives, meanwhile, need to be installed by running installers as root, which then make config changes and plonk binary packages deep into the filesystem while blindly executing whatever scripts they are asked to. I think AppImages win very convincingly in the security and convenience stakes.
Now think about that from Ultimaker's perspective, and understand why they are going down this road. It means that no admin access is required to install and run their app, so long as it stays within the user's sandbox. And you only need to make and distribute one package to achieve this across dozens of distributions.
----------------------------------
Finally: I'm not a developer; quite the reverse. My day job is ensuring that our developers make packages (RPMs) that install properly, upgrade/downgrade, etc., so that the customer types a single 'yum install' command to get started. [And then spends five weeks ankle-deep in Ansible issues as he tries to deploy hundreds of instances of our software, each one slightly differently configured, across a number of data centers. Our cloud platform IS easy to install and IS a nightmare to configure; that is why it costs meeelions and we insist you buy consultancy.]
-
I would prefer it if Canonical staff aggressively dogfooded their own product, and that means updating as a priority; to hear otherwise is worrying.
You didn't hear otherwise. I said that I was running the latest LTS release on one of my personal systems. I said nothing about other systems I use – and there are many of them. Furthermore, since LTS releases are maintained in the long term, and receive bug fixes and security updates, those LTS releases must also be aggressively tested.
If I were doing tech support, your case would be firmly closed with a 'please come back to us once you can reproduce on vanilla Ubuntu' note.
To which I'd reply that I was running "vanilla Ubuntu!" I said so in my post. Perhaps you misinterpreted when I wrote that I was using "the latest kernel, or at least close to it, as provided by Canonical" – I meant that I'm using a stock kernel as delivered with Ubuntu.
AppImages do not run as root, nor as low-UID services; nor can they elevate their own privileges. They can see what the invoking user sees and nothing else; they cannot write over system files or configs, and must store their own config in a user-writable location, usually the invoking user's home directory. If one ever required or asked to run as root, I'd delete it in an instant. The alternatives, meanwhile, need to be installed by running installers as root, which then make config changes and plonk binary packages deep into the filesystem while blindly executing whatever scripts they are asked to. I think AppImages win very convincingly in the security and convenience stakes.
I never said anything about running Cura (or any AppImage) as root, although I did refer to using sudo or root to install it in a sensible place in the filesystem. (Putting binaries in a user's home directory is so cringe-worthy from a Unix/Linux perspective that it doesn't merit serious conversation!)
Now think about that from Ultimaker's perspective, and understand why they are going down this road. It means that no admin access is required to install and run their app, so long as it stays within the user's sandbox. And you only need to make and distribute one package to achieve this across dozens of distributions.
Yes, I understand this; it's a way to reduce developer effort, at the cost of deviating significantly from the software distribution model used by the host OSes. From my perspective, that's a sub-optimal solution at best. Particularly if you're advocating putting the binary in users' home directories, it looks like a huge step backward to the days of DOS, when people intermingled program files, user data, and so on, with a need to manually update everything. There's a reason things have been moving away from that model for years, and that it was never used in the Unix world.
I suggest you stop replying now; if anything, you're making me think worse of AppImage as an application-delivery format.
Oh, and I've deleted Cura from my system.
-
No manual supports. This is definitely the biggest problem with Cura!
-
+1 for the manual supports; either a way to proactively specify what you want supported, or a tool to selectively delete areas of auto-generated support. Either would work for me.
-
Perhaps you misinterpreted when I wrote that I was using "the latest kernel, or at least close to it, as provided by Canonical"
Yes, that. I read that as 'my own non-stock kernel'; given you work on filesystems, this seemed highly likely. Obviously you intended otherwise. I suggest your experience is very atypical of the customers Cura targets.
I suggest you stop replying now; if anything, you're making me think worse of AppImage as an application-delivery format.
But I'm not trying to convince you; I'm mostly making sure that I have things straight in my own head. Remember: I'm about to lead a team of agile and competent developers away from a hundred-ish RPM packages and into one AppImage, or possibly five Docker containers. I've also got to lead testers and others, and have the arguments ready for objectioneers. So here goes:
The big disconnect here: you are assuming that the only way people should use Cura is by installing it deep into a system and making it available to all users. This is 'the true Unix way'; only heretics want to 'break' this perfect solution that has been proven by 20 years of use.
But here in reality: along comes a competent developer and app author; they write something agile, totally up to date, in an architecture-neutral toolset (Python/Qt, for instance) and can then build their product for Windows, Mac, and Unix. It's multimedia; they use the very latest libs they can find.
- Distribution to the vast majority of customers is therefore simple and quick; the Windows users get a standard installer. Bingo: 80% of your customers happy that afternoon.
- Then you create a Mac installer, another simple and well-documented process, and hurrah: 18% more of your customers happy the next day, for very little effort.
Please work out the rest of the process if done via 'the true Linux way'; try to grok the scope and scale of providing a Unix package plus dependencies just for Ubuntu, then scale that out to other distributions (unless you are one of those Debianaholics who pretends other distros don't exist). Think of the man-hours involved and the deep skillsets needed, all for a small fraction of customers, and ones who habitually don't pay for software. There are other Unixes out there that are NOT Linux too; HP-UX and SuSE are very heavily used by the engineering industry, one of the target customer groups.
In reality we should be very grateful to Ultimaker for providing this tool on Linux at all, and grateful to the AppImage/Flatpak/Snap crowd for enabling this. The alternative is the Simplify3D route, where you have a self-installing tarball and a series of scripts to run as a superuser, or other scripts for a normal user. Oh, and you pay 150 moneyunits for that.
Your employers, Canonical, are slow at providing updates to common libraries (and Red Hat, whom I am tied to professionally, is worse). This has reached the point where it is holding Linux back; the application landscape on Windows is much richer and nicer than that on Linux, and the gap, which had narrowed considerably, is widening again as MS picks its game up.
For these reasons you will see more containers: AppImages, Snaps, and Flatpaks; you will see apps switching away from APT/RPM/AUR/Portage and into containers. This will accelerate; you can either learn to ride the wave or go under.
Oh, and:
I refer you, please, yet again, to the concept of an Operating System vs User Applications; I think user applications should absolutely live in user home directories and be confined there by the OS. If written properly, that is all a user application ever needs; it has advantages aplenty for security and performance, and no downside apart from making users do their own install (which with an AppImage can be as simple as double-clicking an icon in a web browser).
Oh, and I've deleted Cura from my system.
Your loss; I'll keep using it, thanks.
Edit: Bad language.
-
Man, what a lot of words have flowed here recently. All good stuff, I'm sure.
-
Apologies @burtoogle for hijacking the thread; I'll shut up on that now.
To be more on-topic: as a newbie I found Cura's slicing excellent; in particular, the way it handles vase mode made me switch to it from Slic3r. A bit more control of supports would be nice. And that is all I can find to comment on regarding how it slices models; the missing-polygon issue I saw in a model with 3.0.x appears resolved in 3.1 and 3.2.
-
Hi @EasyTarget, I don't mind the thread wandering around a bit, it's all relevant. Manual supports is a hot topic over at Cura towers these days but, as I mention above, I only work on the back end so I don't really get involved in any of that. Glad you like the vase mode, improving that so that the seam gets well hidden was one of my contributions.
-
Edit: sorry for the edits…
An odd argument, and I apologize for being confrontational, because srs5694 has many good points and is, in a very real sense, right. Especially about the desirability of having one solution, and of breaking software down into individual items that can be individually upgraded for desktop users and static servers, and doing so in standard manners.
But containerization technology (*) is finally becoming 'a thing'. Flatpak and Snap are the related tech that somewhat address the problem of 'yet another standard' (https://xkcd.com/927/), because they will integrate well into deb- and rpm-based package systems (they are developed by Canonical and Red Hat), but can also be deployed as web objects via a URI. This discussion is a small reflection of a wider discussion happening in the DevOps/developer world. It's my bread and butter; I probably care too much about it.
------------------------------------------
(*) In case anybody is wondering: an AppImage is simply an ISO disk image that is executable; when you 'run' an AppImage it actually mounts itself via FUSE in /tmp and then executes the payload in that mounted read-only filesystem. This is why 'installing' them makes no real sense; you plonk the AppImage anywhere you like and execute it as a non-root user, and it will run from /tmp unless you force it elsewhere. Installing to the OS simply allows all users to do this easily, but for a one-user workstation it could just as easily be dropped on the desktop and double-clicked on demand.
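If you're curious which flavour of AppImage you've been given, the spec (as I understand it; treat the offset and values below as an assumption to verify) puts a tiny magic marker at byte offset 8, just inside the ELF header padding:

```python
def appimage_type(path):
    """Read the AppImage magic: 'AI' plus a type byte at offset 8.
    (Offset and values are from the AppImage spec as I recall it.)"""
    with open(path, "rb") as f:
        f.seek(8)
        magic = f.read(3)
    if magic == b"AI\x01":
        return "type 1 (ISO 9660 image)"
    if magic == b"AI\x02":
        return "type 2 (squashfs payload)"
    return "no AppImage magic found"

# e.g. print(appimage_type("Cura-3.2.1.AppImage"))
```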
Flatpak and Snap are similar, with variations; and Docker, my preferred tool, takes this a step further by providing process, disk, and network containers; more complex to manage, but more powerful in terms of isolating a process from its host.
And so I'm floating on clouds professionally: the OS is reduced to a simple base that I layer containers on top of to actually do stuff, and I don't give a toss about the underlying OS flavor or its update systems. We stay up to date by mirroring all changes to the upstream images, and simply kill and replace the old version. It's utterly alien to a traditional Linux desktop user experience. This is deeply related to cloud deployments and other OS virtualization and containerization techs.
As for why we do this: with containers we can spin up a very small cloud OS image, slap a Docker container on it, and have it serving data in under a second; starting from the same image, installing and starting apps with yum takes minutes, even with a local repo. Starting a blank instance, installing with Anaconda/Kickstart, then updating, then installing the app takes ten minutes. Now imagine you need to scale transcoding during the ads for the Super Bowl, suddenly going from a few dozen streams to several tens of thousands as everybody changes channel and hits the guide. That is the problem containers solve for us and our customers; we can respond in real time instead of needing to estimate and pre-provision.
The ability of containers to standardize distribution for developers and free them from OS-imposed restrictions is just an added bonus.
-
improving that so that the seam gets well hidden was one of my contributions.
There is a vase on my desk right now; I look at it and think 'what seam?'. Impressive!
And to all the slicer devs out there: as someone who can prod a computer into doing things but can't program for salt, I'd like to be appreciative.
- Database and network apps etc. are all very well, but I can see how they work. When software can take a (usually broken) mathematical model and realize it as a series of coordinates and moves between them according to a complex plan, that's actually quite impressive. I can't even see HOW you do that in the first place. Image recognition, 3D scans from photos, etc. all make me feel the same way, and rather humble, considering how hard I found it to even write a 10-line Arduino sketch.
-
Hi @EasyTarget, I don't mind the thread wandering around a bit, it's all relevant. Manual supports is a hot topic over at Cura towers these days but, as I mention above, I only work on the back end so I don't really get involved in any of that. Glad you like the vase mode, improving that so that the seam gets well hidden was one of my contributions.
Is manual support placement a front-end-only problem to solve in Cura? I mean, does the front end determine where the support should be, so that it could be modified to let the user ask for support to be elsewhere as required? If so, then it seems like a specific job for a developer that could be packaged up and opened to the community for funding to have it implemented?
-
Is manual support placement a front-end-only problem to solve in Cura? I mean, does the front end determine where the support should be, so that it could be modified to let the user ask for support to be elsewhere as required? If so, then it seems like a specific job for a developer that could be packaged up and opened to the community for funding to have it implemented?
As far as I am aware, support generation in the slicer (back end) is controlled by volumes (they call them meshes) that define the regions that are to contain support. It is already possible in the front end to define meshes that either specify where support should be added (when it wasn't added automatically) or regions where support should not be added. So, in a rather clunky fashion, it is already possible to manually define where the support is to be generated. Assuming that basic functionality stays, the problem then boils down to having some GUI capability for the user to easily add/subtract support (like S3D's pillars o'shite, which you can create/remove). I think the majority of the work that needs to be done to have manual support is in the front end.
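To make the decision rule concrete, here's a toy sketch; it's nothing like the engine's real geometry code (which does polygon operations layer by layer), just the "support mesh forces, anti-overhang mesh vetoes, otherwise automatic" logic, with axis-aligned boxes standing in for the meshes:

```python
def inside(point, box):
    """box = ((xmin, ymin, zmin), (xmax, ymax, zmax))"""
    lo, hi = box
    return all(l <= c <= h for c, l, h in zip(point, lo, hi))

def needs_support(p, auto_rule, support_meshes, anti_overhang_meshes):
    if any(inside(p, m) for m in anti_overhang_meshes):
        return False        # user vetoed support here
    if any(inside(p, m) for m in support_meshes):
        return True         # user demanded support here
    return auto_rule(p)     # otherwise fall back to the automatic overhang test

# Toy usage: force support in one column, veto a region inside it.
force = [((0, 0, 0), (10, 10, 40))]
veto = [((4, 4, 0), (6, 6, 40))]
auto = lambda p: False      # pretend auto-detection found nothing
print(needs_support((5, 5, 20), auto, force, veto))  # False: vetoed
print(needs_support((1, 1, 20), auto, force, veto))  # True: forced
```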
Incidentally, I shall be meeting some of the Cura devs later this month so I shall be sure to quiz them as to how the manual support capability is coming along.
-
I would like semi-auto support removal. I would like the UI to let me either start from fully generated support and selectively remove, or start from no support and selectively add. You need both; as we know, it all depends on the model and your printer!
In reference to the crashes under Linux, I'd say that in my experience it's the Python libs that cause the problems. For example, I try to add a config for a dual-nozzle printer and bang, 2.7 just dies. Works fine for single-nozzle configs. I mean, really? Adding a dual-nozzle config causes it to crash? That is pretty bonkers.
I try to use the PPA master variants if I can, but I generally can't as they crash a lot. The latest PPA master (3.2), for example, crashed without starting the UI on Ubuntu LTS! Personally I don't upgrade LTS systems until Ubuntu tells me to, and that generally works really well.
I file my bug reports when I can; I have become accustomed to Cura and like it. I have no real issues with the slicing quality. The bridging could be better, but I'd far rather have manual support removal!
-
Incidentally, I shall be meeting some of the Cura devs later this month so I shall be sure to quiz them as to how the manual support capability is coming along.
Thanks, will be interesting to see what they say!
-
I just published a new Cura-DuetRRF plugin version to fix an incompatibility with Cura 3.2:
https://github.com/Kriechi/Cura-DuetRRFPlugin/releases/tag/v0.0.5
This plugin allows you to upload a g-code file directly to a DuetWifi, DuetEthernet, or older Duet and start the print.
You can even "just upload", "upload & print", or "upload & simulate" - all from within Cura.
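For the curious, the core of it is just a couple of HTTP calls against the Duet's web interface. Here is a stripped-down sketch (the real plugin does far more error handling, and the rr_* endpoint details can vary between firmware versions, so treat this as illustrative only):

```python
import urllib.parse
import urllib.request

DUET = "http://duetwifi.local"  # or your printer's IP address

def upload_and_print(local_path, remote_name):
    remote = f"0:/gcodes/{remote_name}"
    with open(local_path, "rb") as f:
        body = f.read()
    # POST the file contents onto the Duet's SD card.
    name = urllib.parse.quote(remote, safe="")
    urllib.request.urlopen(
        urllib.request.Request(f"{DUET}/rr_upload?name={name}", data=body))
    # Start the print; sending M37 P"..." instead of M32 would simulate it.
    gcode = urllib.parse.quote(f'M32 "{remote}"', safe="")
    urllib.request.urlopen(f"{DUET}/rr_gcode?gcode={gcode}")
```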
https://github.com/Kriechi/Cura-DuetRRFPlugin
-
resam, thanks for the hard work!!! Just FYI, it crashes on Cura 3.2.1. It happens on print or simulate.
Is it possible to make this plugin work for the monitor screen and allow movement and temp changes?
-
meh - I can't keep up with Cura releases
I'll take a look in the next few days…
-
Since I just butted my head against it, here is a stupid, trivial UI suggestion: a way to 'clone' a printer definition.
At the moment the only way is to add a new printer via the wizard/dialog, but I wanted to make a new config for my machine while it is (temporarily) wearing a much bigger nozzle.
I did it by following the dialog and then copy-pasting the start/stop g-codes from the old config, and it was rather tedious. [Maybe I could have hacked it more easily directly in the config, but I wasn't in the mood.] It was trivial to do this in Slic3r, so I was a bit narked when the same operation in Cura turned into an 'open, copy, close, open new, paste, close, open old, copy next' session.
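For the record, the 'hack it directly in the config' route I mentioned looks roughly like this. The path and key names below are guesses from poking at my own 3.x install, so treat them as assumptions: back everything up and keep Cura closed while you fiddle.

```python
import configparser
import pathlib
import shutil

# Assumed location of Cura 3.2 machine instances; check your own install.
CFG_DIR = pathlib.Path.home() / ".local/share/cura/3.2/machine_instances"

def clone_machine(old_name, new_name):
    src = CFG_DIR / f"{old_name}.global.cfg"
    dst = CFG_DIR / f"{new_name}.global.cfg"
    shutil.copy(src, dst)
    cfg = configparser.ConfigParser()
    cfg.read(dst)
    cfg["general"]["name"] = new_name  # assumed key; inspect the file first
    with open(dst, "w") as out:
        cfg.write(out)

# e.g. clone_machine("MyPrinter", "MyPrinter 0.8mm nozzle")
```
-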